    Experiencing Data with Brian T. O’Neill

    Wondering how today’s data product leaders are designing machine learning and analytics solutions that are useful, usable, and valuable to their businesses? How do the disciplines of UX design, data product management, data science, and analytics dance together to build valuable, usable, and indispensable decision support applications?

    One of the biggest challenges today’s leaders face is that adoption of analytics and ML solutions by users continues to be low or even non-existent. My name is Brian T. O’Neill, and on Experiencing Data, I offer you a designer’s perspective on why simply developing ML models, dashboards, and apps—outputs—isn’t enough to drive meaningful user and business outcomes with data.

    Through solo episodes and interviews with data product leaders, I explore how teams are integrating product-oriented methodologies and UX design to ensure their data-driven applications will get used in the last mile. After all, you can’t create business value if the humans in the loop won’t use the “solution.” I also feature special episodes on XAI, how ML model explainability and interpretability are tied into design and UX, and why data visualization alone isn’t sufficient to produce human-centered applications and user interfaces.

    Whether you work in product at a B2B software company, or you build internal data products for a traditional enterprise, join me as I dig into what’s working—and what isn’t—when it comes to designing simple, valuable, human-centered data products.

    Hashtag: #ExperiencingData.

    PODCAST HOMEPAGE:
    For 1-page summaries and full text transcripts, join my Insights mailing list on the podcast homepage:
    https://designingforanalytics.com/ed

    ABOUT THE HOST, BRIAN T. O’NEILL:

    About Me – Brian T. O’Neill

    Copyright: © 2019 Designing for Analytics, LLC

    • Apple Podcasts
    • Google Play
    • Spotify

    Latest Episodes:
    109 - The Role of Product Management and Design in Turning ML/AI into a Valuable Business with Bob Mason from Argon Ventures Jan 24, 2023

    Today I’m chatting with Bob Mason, Managing Partner at Argon Ventures. Bob is a VC who seeks out early-stage founders in the ML/AI space and helps them inform their go-to-market, product, and design strategies. In this episode, Bob reveals what he looks for in early-stage data and intelligence startups that are trying to leverage ML/AI. He goes on to explain why it’s important to identify what your strengths are and what you enjoy doing so you can surround yourself with the right team. Bob also shares valuable insight into how to earn trust with potential customers as an early-stage startup, how design impacts a product’s success, and his strategy for differentiating yourself and creating a valuable product outside of the ubiquitous “platform play.”

    Highlights / Skip to:

    • Bob explains why and how Argon Ventures focuses their investments in intelligent industry companies (00:53)
    • Brian and Bob discuss the importance of prioritizing go-to-market strategy over technology (03:42)
    • How Bob views the career progression from data science to product management, and the ways in which his own career has paralleled that journey (07:21)
    • The role customer adoption and user experience play for Bob and the companies he invests in, both pre-investment and post-investment (11:10)
    • Brian and Bob discuss the design capabilities of different teams and why Bob feels it’s something leaders need to keep top of mind (15:25)
    • Bob explains his recommendation to seek out quick wins for AI companies who can’t expect customers to wait for an ROI (19:09)
    • The importance Bob sees in identifying early adopters during a sales cycle for early-stage startups (21:34)
    • Bob describes how being customer-centric allows start-ups to build trust, garner quick wins, and inform their product strategy (23:42)
    • Bob and Brian dive into Bob’s belief that solving intrinsic business problems by vertical increases a start-up’s chance of success substantially over “the platform play” (27:29)
    • Bob gives insight into product trends he believes are going to be extremely impactful in the near future (29:05)
    Quotes from Today’s Episode
    • “In a former life, I was a software engineer, founder, and CTO myself, so I have to watch myself to not just geek out on the technology itself because the most important element when you’re determining if you want to move forward with investment or not, is this: is there a real problem here to be solved or is this technology in search of a problem?” — Bob Mason (01:51)
    • “User-centric research is really valuable, particularly at the earliest stages. If you’re just off by a degree or two, several years down the road, that can be a really material roadblock that you hit. And so, starting off on the right foot, I think is super, super valuable.” – Bob Mason (06:12)
    • “I don’t think the technical folks in an early-stage startup absolve themselves of not being really intimately involved with their go-to-market and who they’re ultimately creating value for.” – Bob Mason (07:07)
    • “When we’re making an investment decision, startups don’t generally have any customers, and so we don’t necessarily use the signal of long-term customer adoption as a driver for our initial investment decision. But it’s very much top of mind after investment and as we’re trying to build and bring the first version of the product to market. Being very thoughtful and mindful of sort of customer experience and long-term adoption is absolutely critical.” – Bob Mason (11:23)
    • “If you’re a scientist, the way you’re presenting both raw data and sort of summaries of data could be quite different than if you’re working with a business analyst that’s a few years out of college with a liberal arts degree. How you interpret results and then present those results, I think, is actually a very interesting design problem.” – Bob Mason (18:40)
    • “I think initially, a lot of early AI startups just kind of assumed that customers would be patient and let the system run, [waiting] 3, 6, 9, 12 months [to get this] magical ROI, and that’s just not how people (buyers) operate.” – Bob Mason (21:00)
    • “Re: platform plays: Obviously, you could still create a tremendous platform that’s very broad, but we think if you focus on the business problem of that particular vertical or domain, that actually creates a really powerful wedge so you can increase your value proposition. You could always increase the breadth of a platform over time. But if you’re not solving that intrinsic problem at the very beginning, you may never get the chance to survive.” – Bob Mason (28:24)
    Links
    • Argon Ventures: https://argon.vc/
    • LinkedIn: https://www.linkedin.com/in/robertmason/details/experience/
    • Email: bob@argon.vc

    108 - Google Cloud’s Bruno Aziza on What Makes a Good Customer-Obsessed Data Product Manager Jan 10, 2023

    Today I’m chatting with Bruno Aziza, Head of Data & Analytics at Google Cloud. Bruno leads a team of outbound product managers in charge of BigQuery, Dataproc, Dataflow, and Looker, and we dive deep into the skills Bruno looks for in these leaders. Bruno describes the three patterns of operational alignment he’s observed in data product management, as well as why he feels ownership and customer obsession are two of the most important qualities a good product manager can have. Bruno and I also dive into how to effectively abstract the core problem you’re solving, as well as how to determine whether a problem might be solved in a better way.

    Highlights / Skip to:

    • Bruno introduces himself and explains how he created his “CarCast” podcast (00:45)
    • Bruno describes his role at Google, the product managers he leads, and the specific Google Cloud products in his portfolio (02:36)
    • What Bruno feels are the most important attributes to look for in a good data product manager (03:59)
    • Bruno details how a good product manager focuses on not only the core problem, but how the problem is currently solved and whether or not that’s acceptable (07:20)
    • What effective abstraction of the problem looks like in Bruno’s view and why he positions product management as a way to help users move forward in their careers (12:38)
    • Why Bruno sees extracting value from data as the number one pain point for data teams and their respective companies (17:55)
    • Bruno gives his definition of a data product (21:42)
    • The three patterns Bruno has observed of operational alignment when it comes to data product management (27:57)
    • Bruno explains the best practices he’s seen for cross-team goal setting and problem-framing (35:30)
    Quotes from Today’s Episode
    • “What’s happening in the industry is really interesting. For people that are running data teams today and listening to us, the makeup of their teams is starting to look more like what we do [in] product management.” — Bruno Aziza (04:29)
    • “The problem is the problem, so focus on the problem, decompose the problem, look at the frictions that are acceptable, look at the frictions that are not acceptable, and look at how by assembling a solution, you can make it most seamless for the individual to go out and get the job done.” – Bruno Aziza (11:28)
    • “As a product manager, yes, we’re in the business of software, but in fact, I think you’re in the career management business. Your job is to make sure that whatever your customer’s job is that you’re making it so much easier that they, in fact, get so much more done, and by doing so they will get promoted, get the next job.” – Bruno Aziza (15:41)
    • “I think that is the task of any technology company, of any product manager that’s helping these technology companies: don’t be building a product that’s looking for a problem. Just start with the problem back and solution from that. Just make sure you understand the problem very well.” – Bruno Aziza (19:52)
    • “If you’re a data product manager today, you look at your data estate and you ask yourself, ‘What am I building to save money? When am I building to make money?’ If you can do both, that’s absolutely awesome. And so, the data product is an asset that has been built repeatedly by a team and generates value out of data.” – Bruno Aziza (23:12)
    • “[Machine learning is] hard because multiple teams have to work together, right? You got your business analyst over here, you’ve got your data scientists over there, they’re not even the same team. And so, sometimes you’re struggling with just the human aspect of it.” – Bruno Aziza (30:30)
    • “As a data leader, an IT leader, you got to think about those soft ways to accomplish the stuff that’s binary, that’s the hard [stuff], right? I always joke, the hard stuff is the soft stuff for people like us because we think about data, we think about logic, we think, ‘Okay if it makes sense, it will be implemented.’ For most of us, getting stuff done is through people. And people are emotional, how can you express the feeling of achieving that goal in emotional value?” – Bruno Aziza (37:36)
    Links
    • As referenced by Bruno, “Good Product Manager/Bad Product Manager”: https://a16z.com/2012/06/15/good-product-managerbad-product-manager/
    • LinkedIn: https://www.linkedin.com/in/brunoaziza/
    • Bruno’s Medium Article on Competing Against Luck by Clayton M. Christensen: https://brunoaziza.medium.com/competing-against-luck-3daeee1c45d4
    • The Data CarCast on YouTube: https://www.youtube.com/playlist?list=PLRXGFo1urN648lrm8NOKXfrCHzvIHeYyw

    107 - Tom Davenport on Data Product Management and the Impact of a Product Orientation on Enterprise Data Science and ML Initiatives Dec 27, 2022

    Today I’m chatting with returning guest Tom Davenport, who is a Distinguished Professor at Babson College, a Visiting Professor at Oxford, a Research Fellow at MIT, and a Senior Advisor to Deloitte’s AI practice. He is also the author of three new books (!) on AI and in this episode, we’re discussing the role of product orientation in enterprise data science teams, the skills required, what he’s seeing in the wild in terms of teams adopting this approach, and the value it can create. Back in episode 26, Tom was a guest on my show and he gave the data science/analytics industry an approximate “2 out of 10” rating in terms of its ability to generate value with data. So, naturally, I asked him for an update on that rating, and he kindly obliged. How are you all doing? Listen in to find out!

    Highlights / Skip to:

    • Tom provides an updated rating (from 1 to 10) as to how well he thinks data science and analytics teams are doing these days at creating economic value (00:44)
    • Why Tom believes that “motivation is not enough for data science work” (03:06)
    • Tom provides his definition of what data products are and some opinions on other industry definitions (04:22)
    • How Tom views the rise of taking a product approach to data roles and why data products must be tied to value (07:55)
    • Tom explains why he feels top-down executive support is needed to drive a product orientation (11:51)
    • Brian and Tom discuss how they feel companies should prioritize true data products versus more informal AI efforts (16:26)
    • The trends Tom sees in the companies and teams that are implementing a data product orientation (19:18)
    • Brian and Tom discuss the models they typically see for data teams and their key components (23:18)
    • Tom explains the value and necessity of data product management (34:49)
    • Tom describes his three new books (39:00)
    Quotes from Today’s Episode
    • “Data science in general, I think has been focused heavily on motivation to fit lines and curves to data points, and that particular motivation certainly isn’t enough in that even if you create a good model that fits the data, it doesn’t mean at all that is going to produce any economic value.” – Tom Davenport (03:05)
    • “If data scientists don’t worry about deployment, then they’re not going to be in their jobs for terribly long because they’re not providing any value to their organizations.” – Tom Davenport (13:25)
    • “Product also means you got to market this thing if it’s going to be successful. You just can’t assume because it’s a brilliant algorithm with capturing a lot of area under the curve that it’s somehow going to be great for your company.” – Tom Davenport (19:04)
    • “[PM is] a hard thing, even for people in non-technical roles, because product management has always been a sort of ‘minister without portfolio’ sort of job, and you know, influence without formal authority, where you are responsible for a lot of things happening, but the people don’t report to you, generally.” – Tom Davenport (22:03)
    • “This collaboration between a human being making a decision and an AI system that might in some cases come up with a different decision but can’t explain itself, that’s a really tough thing to do [well].” – Tom Davenport (28:04)
    • “This idea that we’re going to use externally-sourced systems for ML is not likely to succeed in many cases because, you know, those vendors didn’t work closely with everybody in your organization” – Tom Davenport (30:21)
    • “I think it’s unlikely that [organizational gaps] are going to be successfully addressed by merging everybody together in one organization. I think that’s what product managers do is they try to address those gaps in the organization and develop a process that makes coordination at least possible, if not true, all the time.” – Tom Davenport (36:49)
    Links
    • Tom’s LinkedIn: https://www.linkedin.com/in/davenporttom/
    • Tom’s Twitter: https://twitter.com/tdav
    • All-in On AI by Thomas Davenport & Nitin Mittal, 2023
    • Working With AI by Thomas Davenport & Stephen Miller, 2022
    • Advanced Introduction to AI in Healthcare by Thomas Davenport, John Glaser, & Elizabeth Gardner, 2022
    • Competing On Analytics by Thomas Davenport & Jeanne G. Harris, 2007

    106 - Ideaflow: Applying the Practice of Design and Innovation to Internal Data Products w/ Jeremy Utley Dec 13, 2022

    Today I’m chatting with former-analyst-turned-design-educator Jeremy Utley of the Stanford d.school and co-author of Ideaflow. Jeremy reveals the psychology behind great innovation, and the importance of creating psychological safety for a team to generate what they may view as bad ideas. Jeremy speaks to the critical collision of unrelated frames of reference when problem-solving, as well as why creativity is actually more of a numbers game than awaiting that singular stroke of genius. Listen as Jeremy gives real-world examples of how to practice and measure (!) your innovation efforts and apply them to data products.

    Highlights / Skip to:

    • Jeremy explains the methodology of thinking he’s adopted after moving from highly analytical roles to the role he’s in now (01:38)
    • The approach Jeremy takes to the existential challenge of balancing innovation with efficiency (03:54)
    • Brian shares a story of a creative breakthrough he had recently and Jeremy uses that to highlight how innovation often comes in a way contrary to normalcy and professionalism (09:37)
    • Why Jeremy feels innovation and creativity demand multiple attempts at finding solutions (16:13)
    • How to take an innovation-forward approach like the ones Jeremy has described when working on internal tool development (19:33)
    • Jeremy’s advice for working through bad ideas faster in order to get to the good ideas (25:18)
    • The approach Jeremy takes to generate a large volume of ideas, rather than focusing only on “good” ideas, including a real-life example (31:54)
    • Jeremy’s beliefs on the importance of creating psychological safety to promote innovation and creative problem-solving (35:11)
    Quotes from Today’s Episode
    • “I’m in spreadsheets every day to this day, but I recognize that there’s a time and place when that’s the tool that’s needed, and then specifically, there’s a time and a place where that’s not going to help me and the answer is not going to be found in the spreadsheet.” – Jeremy Utley (03:13)
    • “There’s the question of, ‘Are we doing it right?’ And then there’s a different question, which is, ‘Are we doing the right “it”?’ And I think a lot of us tend to fixate on, ‘Are we doing it right?’ And we have an ability to perfectly optimize that what should not be done.” – Jeremy Utley (05:05)
    • “I think a vendetta that I have is against this wrong placement of—this exaltation of efficiency is the end-all, be-all. Innovation is not efficient. And the question is not how can I be efficient. It’s what is effective. And effectiveness, oftentimes when it comes to innovation and breaking through, doesn’t feel efficient.” – Jeremy Utley (09:17)
    • “The way the brain works, we actually understand it. The way breakthroughs work we actually understand them. The difficulty is it challenges our definitions of efficiency and professionalism and all of these things.” – Jeremy Utley (15:13)
    • “What’s the a priori probability that any solution is the right solution? Or any idea is a good idea? It’s exceptionally low. You have to be exceptionally arrogant to think that most of your ideas are good. They’re not. That’s fine, we don’t mind because then what’s efficient is actually to generate a lot.” – Jeremy Utley (26:20)
    • “If you don’t learn that nothing happens when the ball hits the floor, you can never learn how to juggle. And to me, it’s a really good metaphor. The teams that don’t learn nothing happens when they have a bad idea. Literally, the world does not end. They don’t get fired. They don’t get ridiculed. Now, if they do get fired or ridiculed, that’s a leadership problem.” – Jeremy Utley (35:59)
    • “[The following] is an essential question for a team leader to ask. Do people on my team have the freedom, at least with me, to share what they truly fear could be an incredibly stupid idea?” – Jeremy Utley (41:52)
    Links
    • Ideaflow: https://www.amazon.com/Ideaflow-Only-Business-Metric-Matters-ebook/dp/B09R6M3292
    • Ideaflow website: https://ideaflow.design
    • Personal webpage: https://jeremyutley.design
    • LinkedIn: https://www.linkedin.com/in/jeremyutley/
    • Twitter: https://twitter.com/jeremyutley/
    • Brian’s musical arrangement of Gershwin’s “Prelude for Piano II featuring the Siamese Cat Song” performed by Mr. Ho’s Orchestrotica - listen on Spotify

    105 - Defining “Data Product” the Producty Way and the Non-technical Skills ML/AI Product Managers Need Nov 29, 2022

    Today I’m discussing something we’ve been talking about a lot on the podcast recently - the definition of a “data product.” While my definition is still a work in progress, I think it’s worth putting out into the world at this point to get more feedback. In addition to sharing my definition of data products (as defined the “producty” way), on today’s episode I also discuss some of the non-technical skills that data product managers (DPMs) in the ML and AI space need if they want to achieve good user adoption of their solutions. I’ll also share my thoughts on whether data scientists can make good data product managers, what a DPM can do to better understand their users and stakeholders, and how product and UX design factor into this role.

    Highlights / Skip to:

    • I introduce my reasons for sharing my definition of a data product (0:46)
    • My definition of data product (7:26)
    • Thinking the “producty” way (8:14)
    • My thoughts on necessary skills for data PMs (in particular, AI & machine learning product management) (12:21)
    • How data scientists can become good data product managers (DPMs) by taking off the data science hat (13:42)
    • Understanding the role of UX design within the context of DPM (16:37)
    • Crafting your sales and marketing strategies to emphasize the value of your product to the people who can use or purchase it (23:07)
    • How to build a team that will help you increase adoption of your data product (30:01)
    • How to build relationships with stakeholders/customers that allow you to find the right solutions for them (33:47)
    • Letting go of a technical identity to develop a new identity as a DPM who can lead a team to build a product that actually gets used (36:32)
    Quotes from Today’s Episode
    • “This is what’s missing in some of the other definitions that I see around data products [...] they’re not talking about it from the customer of the data product lens. And that orientation sums up all of the work that I’m doing and trying to get you to do as well, which is to put the people at the center of the work that you’re doing and not the data science, engineering, tech, or design. I want you to put the people at the center.” (6:12)
    • “A data product is a data-driven, end-to-end, human-in-the-loop decision support solution that’s so valuable, users would potentially pay to use it.” (7:26)
    • “I want to plunge all the way in and say, ‘if you want to do this kind of work, then you need to be thinking the product-y way.’ And this means inherently letting go of some of the data science-y way of thinking and the data-first kinds of ways of thinking.” (11:46)
    • “I’ve read in a few places that data scientists don’t make for good data product managers. [While it may be true that they’re more introverted,] I don’t think that necessarily means that there’s an inherent problem with data scientists becoming good data product managers. I think the main challenge will be—and this is the same thing for almost any career transitioning into product management—is knowing when to let go of your former identity and wear the right hat at the right time.” (14:24)
    • “Make better things for people that will improve their life and their outcomes and the business value will follow if you’ve properly aligned those two things together.” (17:21)
    • “The big message here is this: there is always a design and experience, whether it is an API, or a platform, a dashboard, a full application, etc. Since there are no null design choices, how much are you going to intentionally shape that UX, or just pray that it comes out good on the other end? Prayer is not really a reliable strategy. If you want to routinely do this work right, you need to put intention behind it.” (22:33)
    • “Relationship building is a must, and this is where applying user experience research can be very useful—not just for users, but also with stakeholders. It’s learning how to ask really good questions and learning the feelings, emotions, and reasons why people ask your team to build the thing that they’ve asked for. Learning how to dig into that is really important.” (26:26)
    Links
    • Designing for Analytics Community
    • Work With Me
    • Email
    • Record a question

    104 - Surfacing the Unarticulated Needs of Users and Stakeholders through Effective Listening Nov 15, 2022

    Today I’m chatting with Indi Young, independent qualitative data scientist and author of Time to Listen. Indi explains how it is possible to gather and analyze qualitative data in a way that is meaningful to the desired future state of your users, and that learning how to listen and not just interview users is much like learning to ride a bicycle. Listen (!) to find out why pushing back is a necessary part of the design research process, how to build an internal sensor that allows you to truly uncover the nuggets of information that are critical to your projects, and the importance of understanding thought processes to prevent harmful outcomes.

    Highlights / Skip to:

    • Indi introduces her perspective on analyzing qualitative data sets (00:51)
    • Indi’s motivation for working in design research and the importance of being able to capture and understand patterns to prevent harmful outcomes (05:09)
    • The process Indi goes through for problem framing and understanding a user’s desired future state (11:11)
    • Indi explains how to listen effectively in order to understand the thinking style of potential end users (15:42)
    • Why Indi feels pushing back on problems within projects is a vital part of taking responsibility and her recommendations for doing so effectively (21:45)
    • The importance Indi sees in building up a sensor in order to be able to detect nuggets clients give you for their upcoming projects (28:25)
    • The difference in techniques Indi observes between an interview, a listening session, and a survey (33:13)
    • Indi describes her published books and reveals which one she’d recommend listeners start with (37:34)
    Quotes from Today’s Episode
    • “A lot of qualitative data is not trusted, mainly because the people who are doing the not trusting have encountered bad qualitative data.” — Indi Young (03:23)
    • “When you’re learning to ride a bike, when you’re learning to decide what knowledge is needed, you’re probably going to burn through a bunch of money-making knowledge that never gets used. So, that’s when you start to learn, ‘I need to frame this better, and to frame it, I can’t do it by myself.’” – Indi Young (11:57)
    • “What you want to do is get beyond the exterior and get to the interior, which is where somebody tells you what actually went through their mind when they did that thing in the past, not what’s going through their mind right now. And that’s a very important distinction.” – Indi Young (20:28)
    • “Re: dealing with stakeholders: You’re not doing your job if you don’t push back. You built up a lot of experience, you got hired, they hired you and your thinking and your experience, and if what went through your mind is, like, ‘This is wrong,’ but you don’t act on it, then they should not pay you a salary.” – Indi Young (22:45)
    • “I’ve seen a lot of people leave their perfectly promising career because it was too hard to get to the point of accepting that you have to network, that I’m not going to be that one-in-a-million person who’s the brilliant person with a brilliant idea and get my just rewards that way.” – Indi Young (25:13)
    • “What’s really interesting about a listening session is that it doesn’t—aside from building this sensor and learning what the techniques are for helping a person get to their interior cognition rather than that exterior … to get past that into the inner thinking, the emotional reactions, and the guiding principles, aside from the sensor and those techniques, there’s not much to it.” – Indi Young (32:45)
    • “And once you start building that [sensor], and this idea of just having one generative question about the purpose—because the whole thing is framed by the purpose—there you go. Get started. You have to practice. So, it’s like riding a bike. Go for it. You won’t have those sensors at first, but you’ll start to learn how to build them.” – Indi Young (36:41)
    Links Referenced:
    • Time to Listen: https://www.amazon.com/Time-Listen-Invention-Inclusion-Assumptions/dp/1944627111
    • Mental Models: https://www.amazon.com/Mental-Models-Aligning-Strategy-Behavior/dp/1933820063
    • Practical Empathy: https://www.amazon.com/Practical-Empathy-Collaboration-Creativity-Your/dp/1933820489
    • indiyoung.com: https://indiyoung.com
    • LinkedIn: https://www.linkedin.com/in/indiyoung/
    • Instagram: https://www.instagram.com/indiyoung_/

    103 - Helping Pediatric Cardiac Surgeons Make Better Decisions with ML featuring Eugenio Zuccarelli of MIT Media Lab Nov 01, 2022

    Today I’m chatting with Eugenio Zuccarelli, Research Scientist at MIT Media Lab and Manager of Data Science at CVS. Eugenio explains how he has created multiple algorithms designed to help shape decisions made in life or death situations, such as pediatric cardiac surgery and during the COVID-19 pandemic. Eugenio shares the lessons he’s learned on how to build trust in data when the stakes are life and death. Listen and learn how culture can affect adoption of decision support and ML tools, the impact the delivery of information has on the user's ability to understand and use data, and why Eugenio feels that design is more important than the inner workings of ML algorithms.

    Highlights / Skip to:

    • Eugenio explains why he decided to work on machine learning models for cardiologists and healthcare workers involved in the COVID-19 pandemic (01:53)
    • The workflow surgeons would use when incorporating the predictive algorithm and application Eugenio helped develop (04:12)
    • The question Eugenio’s predictive algorithm helps surgeons answer when evaluating whether to use various pediatric cardiac surgical procedures (06:37)
    • The path Eugenio took to build trust with experienced surgeons and drive product adoption and the role of UX (09:42)
    • Eugenio’s approach to identifying key problems and finding solutions using data (14:50)
    • How Eugenio has tracked value delivery and adoption success for a tool that relies on more than just accurate data & predictions, but also surgical skill and patient case complexity (22:26)
    • The design process Eugenio started early on to optimize user experience and adoption (28:40)
    • Eugenio’s key takeaways from a different project that helped government agencies predict what resources would be needed in which areas during the COVID-19 pandemic (34:45)
    Quotes from Today’s Episode
    • “So many people today are developing machine-learning models, but I truly find the most difficult parts to be basically everything around machine learning … culture, people, stakeholders, products, and so on.” — Eugenio Zuccarelli (01:56)
    • “Developing machine-learning components, clean data, developing the machine-learning pipeline, those were the easy steps. The difficult ones [were] gaining trust, as you said, developing something that was useful. And talking about trust, it’s especially tricky in the healthcare industry.” — Eugenio Zuccarelli (10:42)
    • “Because this tennis match, this ping-pong match between what can be done and what’s [the] problem [...] thankfully, we know, of course, it is not really the route to go. We don’t want to develop technology for the sake of it.” — Eugenio Zuccarelli (14:49)
    • “We put so much effort on the machine-learning side and then the user experience is so key, it’s probably even more important than the inner workings.” — Eugenio Zuccarelli (29:22)
    • “It was interesting to see exactly how the doctor is really focused on their job and doing it as well as they can, not really too interested in fancy [...] solutions, and so we were really able to not focus too much on appearance or fancy components, but more on usability and readability.” — Eugenio Zuccarelli (33:45)
    • “People’s ability to trust data, and how this varies from a lot of different entities, organizations, countries, [etc.] This really makes everything tricky. And of course, when you have a pandemic, this acts as a catalyst and enhances all of these cultural components.” — Eugenio Zuccarelli (35:59)
    • “I think [design success] boils down to delivery. You can package the same information in different ways [so that] it actually answers their questions in the ways that they’re familiar with.” — Eugenio Zuccarelli (37:42)
    Links
    • LinkedIn: https://www.linkedin.com/in/jayzuccarelli
    • Twitter: twitter.com/jayzuccarelli
    • Personal website: https://eugeniozuccarelli.com
    • Medium: jayzuccarelli.medium.com

    102 - CDO Spotlight: The Non-Technical Roles Data Science and Analytics Teams Need to Drive Adoption of Data Products w/ Iván Herrero Bartolomé Oct 18, 2022

    Today I’m chatting with Iván Herrero Bartolomé, Chief Data Officer at Grupo Intercorp. Iván describes how he was prompted to write his new article in CDO Magazine, “CDOs, Let’s Get Out of Our Comfort Zone” as he recognized the importance of driving cultural change within organizations in order to optimize the use of data. Listen in to find out how Iván is leveraging the role of the analytics translator to drive this cultural shift, as well as the challenges and benefits he sees data leaders encounter as they move from tactical to strategic objectives. Iván also reveals the number one piece of advice he’d give CDOs who are struggling with adoption.

    Highlights / Skip to:

    • Iván explains what prompted him to write his new article, “CDOs, Let’s Get Out of Our Comfort Zone” (01:08)
    • What Iván feels is necessary for data leaders to close the gap between data and the rest of the business and why (03:44)
    • Iván dives into who he feels really owns delivery of value when taking on new data science and analytics projects (09:50)
    • How Iván’s team went from managing technical projects that often didn’t make it to production to working on strategic projects that almost always make it to production (13:06)
    • The framework Iván has developed to upskill technical and business roles to be effective data / analytics translators (16:32)
    • The challenge Iván sees data leaders face as they move from setting and measuring tactical goals to pursuing strategic goals and initiatives (24:12)
    • Iván explains how the C-Suite’s attitude impacts the cross-functional role of data & analytics leadership (28:55)
    • The number one piece of advice Iván would give new CDOs struggling with low adoption of their data products and solutions (31:45)
    Quotes from Today’s Episode
    • “We’re going to do all our best to ensure that [...] everything that is expected from us is done in the best possible way. But that’s not going to be enough. We need a sponsorship and we need someone accountable for the project and someone who will be pushing and enabling the use of the solution once we are gone. Because we cannot stay forever in every company.” – Iván Herrero Bartolomé (10:52)
    • “We are trying to upskill people from the business to become data translators, but that’s going to take time. Especially what we try to do is to take product owners and give them a high-level immersion on the state-of-the-art and the possibilities that data analytics bring to the table. But as we can’t rely on our companies having this kind of talent and these data translators, they are one of the profiles that we bring in for every project that we work on.” – Iván Herrero Bartolomé (13:51)
    • “There’s a lot to do, not just between data and analytics and the other areas of the company, but aligning the incentives of all the organization towards the same goals in a way that there’s no friction between the goals of the different areas, the people, [...] and the final goals of the organization.” – Iván Herrero Bartolomé (23:13)
    • “Deciding which goals are you going to be co-responsible for, I think that is a sophisticated process that it’s not mastered by many companies nowadays. That probably is one of the main blockers keeping data analytics areas working far from their business counterparts.” – Iván Herrero Bartolomé (26:05)
    • “When the C-suite looks at data and analytics, if they think these are just technical skills, then the data analytics team are just going to behave as technical people. And many, many data analytics teams are set up as part of the IT organization. So, I think it all begins somehow with how the C-suite of our companies look at us.” – Iván Herrero Bartolomé (28:55)
    • “For me, [digital] means much more than the technical development of solutions; it should also be part of the transformation of the company, both in how companies develop relationships with their customers, but also inside how every process in the companies becomes more nimble and can react faster to the changes in the market.” – Iván Herrero Bartolomé (30:49)
    • “When you feel that everyone else [is] not doing what you think they should be doing, think twice about whether it is they who are not doing what they should be doing or if it’s something that you are not doing properly.” – Iván Herrero Bartolomé (31:45)
    Links
    • “CDOs, Let’s Get Out of Our Comfort Zone”: https://www.cdomagazine.tech/cdo_magazine/topics/opinion/cdos-lets-get-out-of-our-comfort-zone/article_dce87fce-2479-11ed-a0f4-03b95765b4dc.html
    • LinkedIn: https://www.linkedin.com/in/ivan-herrero-bartolome/

    101 - Insights on Framing IOT Solutions as Data Products and Lessons Learned from Katy Pusch Oct 04, 2022

    Today I’m chatting with Katy Pusch, Senior Director of Product and Integration for Cox2M. Katy describes the lessons she’s learned around making sure that the “juice is always worth the squeeze” for new users to adopt data solutions into their workflow. She also explains the methodologies she’d recommend to data & analytics professionals to ensure their IOT and data products are widely adopted. Listen in to find out why this former analyst turned data product leader feels it’s crucial to focus on more than just delivering data or AI solutions, and how spending more time upfront performing qualitative research on users can wind up being more efficient in the long run than jumping straight into development.

    Highlights / Skip to:

    • What Katy does at Cox2M, and why the data product manager role is so hard to define (01:07)
    • Defining the value of the data in workflows and how that’s approached at Cox2M (03:13)
    • Who buys from Cox2M and the customer problems that Katy’s product solves (05:57)
    • How Katy approaches the zero-to-one process of taking IOT sensor data and turning it into a customer experience that provides a valuable solution (08:00)
    • What Katy feels best motivates the adoption of a new solution for users (13:21)
    • Katy describes how she spends more time upfront before development to ensure she’s solving the right problems for users (16:13)
    • Katy’s views on the importance of data science & analytics pros being able to communicate in the language of their audience (20:47)
    • The differences Katy sees between designing data products for sophisticated data users vs a broader audience (24:13)
    • The methods Katy uses to effectively perform qualitative research and her triangulation method to surface the real needs of end users (27:29)
    • Katy’s views on the most valuable skills for future data product managers (35:24)
    Quotes from Today’s Episode
    • “I’ve had the opportunity to get a little bit closer to our customers than I was in the beginning parts of my tenure here at Cox2M. And it’s just like a SaaS product in the sense that the quality of your data is still dependent on your customers’ workflows and their ability to engage in workflows that supply accurate data. And it’s been a little bit enlightening to realize that the same is true for IoT.” – Katy Pusch (02:11)
    • “Providing insights to executives that are [simply] interesting is not really very impactful. You want to provide things that are actionable and that drive the business forward.” – Katy Pusch (4:43)
    • “So, there’s one side of it, which is [the] happy path: figure out a way to embed your product in the customer’s existing workflow. That’s where the most success happens. But in the situation we find ourselves in right now with [this IoT solution], we do have to ask them to change their workflow.” – Katy Pusch (12:46)
    • “And the way to communicate [the insight to other stakeholders] is not with being more precise with your numbers [or adding] statistics. It’s just to communicate the output of your analysis more clearly to the person who needs to be able to make a decision.” – Katy Pusch (23:15)
    • “You have to define ‘What decision is my user making on a repeated basis that is worth building something that it does automatically?’ And so, you say, ‘What are the questions that my user needs answers to on a repeated basis?’ … At its essence, you’re answering three or four questions for that user [that] have to be the most important [...] questions for your user to add value. And that can be a difficult thing to derive with confidence.” – Katy Pusch (25:55)
    • “The piece of workflow [on the IOT side] that’s really impactful there is we’re asking for an even higher degree of change management in that case because we’re asking them to attach this device to their vehicle, and then detach it at a different point in time and there’s a procedure in the solution to allow for that, but someone at the dealership has to engage in that process. So, there’s a change management in the workflow that the juice has to be worth the squeeze to encourage a customer to embark in that journey with you.” – Katy Pusch (12:08)
    • “Finding people in your organization who have the appetite to be cross-functionally educated, particularly in a data arena, is very important [to] help close some of those communication gaps.” – Katy Pusch (37:03)

    100 - Why Your Data, AI, Product & Business Strategies Must Work Together (and Digital Transformation is The Wrong Framing) with Vin Vashishta Sep 20, 2022

    Today I’m chatting with Vin Vashishta, Founder of V Squared. Vin believes that with methodical strategic planning, companies can prepare for continuous transformation by removing the silos that exist between leadership, data, AI, and product teams. How can these barriers be overcome, and what is the impact of doing so? Vin answers those questions and more, explaining why process disruption is necessary for long-term success and gives real-world examples of companies who are adopting these strategies.

    Highlights / Skip to:

    • What the AI ‘Last Mile’ Problem is (03:09)
    • Why Vin sees so many businesses reevaluating their offerings and realigning with their core business model (09:01)
    • Why every company today is struggling to figure out how to bridge the gap between data, product, and business value (14:25)
    • How the skillsets needed for success are evolving for data, product, and business leaders (14:40)
    • Vin’s process when he’s helping a team with a data strategy, and what the end result looks like (21:53)
    • Why digital transformation is dead, and how to reframe what business transformation means in today’s day and age (25:03)
    • How Airbnb used data to inform their overall strategy to survive during a time of massive industry disruption, and how those strategies can be used by others as a preventative measure (29:03)
    • Unpacking how a data strategy leader can work backward from a high-level business strategy to determining actionable steps and use cases for ML and analytics (32:52)
    • Who (what roles) are ultimately responsible in an ideal strategy planning session? (34:41)
    • How the C-Suite can bridge business & data strategy and the impact the world’s largest companies are seeing as a result (36:01)
    Quotes from Today’s Episode
    • “And when you have that [core business & technology strategy] disconnect, technology goes in one direction, what the business needs and what customers need sort of lives outside of the silo.” – Vin Vashishta (06:06)
    • “Why are we doing data and not just traditional software development? Why are we doing data science and not analytics? There has to be a justification because each one of these is more expensive than the last, each one is, you know, less certain.” – Vin Vashishta (10:36)
    • “[The right people to train] are smart about the technology, but have also lived with the users, have some domain expertise, and the interest in making a bigger impact. Let’s put them in strategy roles.” – Vin Vashishta (18:58)
    • “You know, this is never going to end. Transformation is continuous. I don’t call it digital transformation anymore because that’s making you think that this thing is somehow a once-in-a-generation change. It’s not. It’s once every five years now.” – Vin Vashishta (25:03)
    • “When do you want to have those [business] opportunities done by? When do you want to have those objectives completed by? Well, then that tells you how fast you have to transform if you want to use each one of these different technologies.” – Vin Vashishta (25:37)
    • “You’ve got to disrupt the process. Strategy planning is not the same anymore. Look at how Amazon does it. ... They are destroying their competitors because their strategy planning process is both expert and data model-driven.” – Vin Vashishta (33:44)
    • “And one of the critical things for CDOs to do is tell stories with data to the board. When they sit in and talk to the board. They need to tell those stories about how one data point hit this one use case and the company made $4 million.” – Vin Vashishta (39:33)
    Links
    • HumblePod: https://humblepod.com
    • V Squared: https://datascience.vin
    • LinkedIn: https://www.linkedin.com/in/vineetvashishta/
    • Twitter: https://twitter.com/v_vashishta
    • YouTube channel: https://www.youtube.com/c/TheHighROIDataScientist
    • Substack: https://vinvashishta.substack.com/

    099 - Don’t Boil the Ocean: How to Generate Business Value Early With Your Data Products with Jon Cooke, CTO of Dataception Sep 06, 2022

    Today I’m sitting down with Jon Cooke, founder and CTO of Dataception, to learn his definition of a data product and his views on generating business value with your data products. In our conversation, Jon explains his philosophy on data products and where design and UX fit in. We also review his conceptual model for data products (which he calls the data product pyramid), and discuss how together, these concepts allow teams to ship working solutions faster that actually produce value.

    Highlights / Skip to:

    • Jon’s definition of a data product (1:19)
    • Brian explains how UX research and design planning can and should influence data architecture—so that last mile solutions are useful and usable (9:47)
    • The four characteristics of a data product in Jon’s model (16:16)
    • The idea of products having a lifecycle with direct business/customer interaction/feedback (17:15)
    • Understanding Jon’s data product pyramid (19:30)
    • The challenges when customers/users don’t know what they want from data product teams - and who should be doing the work to surface requirements (24:44)
    • Mitigating risk and the importance of having management buy-in when adopting a product-driven approach (33:23)
    • Does the data product pyramid account for UX? (35:02)
    • What needs to change in an org model that produces data products that aren’t delivering good last mile UXs (39:20)
    Quotes from Today’s Episode
    • “A data product is something that specifically solves a business problem, a piece of analytics, data use case, a pipeline, datasets, dashboard, [anything of] that type that solves a business use case, and has a customer, and [has] a product lifecycle to it.” - Jon (2:15)
    • “I’m a fan of any definition that includes some type of deployment and use by some human being. That’s the end of the cycle, because the idea of a product is a good that has been made, theoretically, for sale.” - Brian (5:50)
    • “We don’t build a lot of stuff around cloud anymore. We just don’t build it from scratch. It’s like, you know, we don’t generate our own electricity, we don’t mill our own flour. You know, the cloud—there’s a bunch of composable services, which I basically pull together to build my application, whatever it is. We need to apply that thinking all the way through the stack, fundamentally.” - Jon (13:06)
    • “It’s not a data science problem, it’s not a business problem, it’s not a technology problem, it’s not a data engineering problem, it’s an everyone problem. And I advocate small, multidisciplinary teams, which have a business value person in it, have an SME, have a data scientist, have a data architect, have a data engineer, as a small pod that goes in and answer those questions.” - Jon (26:28)
    • “The idea is that you’re actually building the data products, which are the back-end, but you’re actually then also doing UX alongside that, you know? You’re doing it in tandem.” - Jon (37:36)
    • “Feasibility is one of the legs of the stool. There has to be market need, and your market just may be the sales team, but there needs to be some promise of value there that this person is really responsible for at the end of the day, is this data product going to create value or not?” - Brian (42:35)
    • “The thing about data products is sometimes you don’t know how feasible it is until you actually look at the data…You’ve got to do what we call data archaeology. You got to go and find the data, you got to brush it off, and you’re looking at and go, ‘Is it complete?’” - Jon (44:02)
    Links Referenced:
    • Dataception
    • Data Product Pyramid
    • Email: joncooke@dataception.com
    • LinkedIn: https://www.linkedin.com/in/jon-cooke-096bb0/

    098 - Why Emilie Schario Wants You to Run Your Data Team Like a Product Team Aug 23, 2022

    Today I’m chatting with Emilie Schario, a Data Strategist in Residence at Amplify Partners. Emilie thinks data teams should operate like product teams. But what led her to that conclusion, and how has she put the idea into practice? Emilie answers those questions and more, delving into what kind of pushback and hiccups someone can expect when switching from being data-driven to product-driven and sharing advice for data scientists and analytics leaders.

    Highlights / Skip to:

    • Answering the question “whose job is it” (5:18)
    • Understanding and solving problems instead of just building features people ask for (9:05)
    • Emilie explains what Amplify Partners is and talks about her work experience and how it fuels her perspectives on data teams (11:04)
    • Emilie and I talk about the definition of data product (13:00)
    • Emilie talks about her approach to building and training a data team (14:40)
    • We talk about UX designers and how they fit into Emilie’s data teams (18:40)
    • Emilie talks about the book and blog “Storytelling with Data” (21:00)
    • We discuss the pushback you can expect when trying to switch a team from being data-driven to being product-driven (23:18)
    • What hiccups people can expect when switching to a product-driven model (30:36)
    • Emilie’s advice for data scientists and analytics leaders (35:50)
    • Emilie explains what Locally Optimistic is (37:34)
    Quotes from Today’s Episode
    • “Our thesis is…we need to understand the problems we’re solving before we start building solutions, instead of just building the things people are asking for.” — Emilie (2:23)
    • “I’ve seen this approach of flipping the ask on its head—understanding the problem you’re trying to solve—work and be more successful at helping drive impact instead of just letting your data team fall into this widget builder service trap.” — Emilie (4:43)
    • “If your answer to any problem to me is, ‘That’s not my job,’ then I don’t want you working for me because that’s not what we’re here for. Your job is whatever the problem in front of you that needs to be solved.” — Emilie (7:14)
    • “I don’t care if you have all of the data in the world and the most talented machine learning engineers and you’ve got the ability to do the coolest new algorithm fancy thing. If it doesn’t drive business impact, it doesn’t matter.” — Emilie (7:52)
    • “Data is not just a thing that anyone can do. It’s not just about throwing numbers in a spreadsheet anymore. It’s about driving business impact. But part of how we drive business impact with data is making it accessible. And accessible isn’t just giving people the numbers, it’s also communicating with it effectively, and UX is a huge piece of how we do that.” — Emilie (19:57)
    • “There are no null choices in design. Someone is deciding what some other human—a customer, a client, an internal stakeholder—is going to use, whether it’s a React app, or a Power BI dashboard, or a spreadsheet dump, or whatever it is, right? There will be an experience that is created, whether it is intentionally created or not.” — Brian (20:28)
    • “People will think design is just putting in colors that match together, like, or spinning the color wheel and seeing what lands. You know, there’s so much more to it. And it is an expertise; it is a domain that you have to develop.” — Emilie (34:58)
    Links Referenced:
    • Blog post by Rifat Majumder
    • storytellingwithdata.com
    • Experiencing Data Episode 28 with Cole Nussbaumer Knaflic
    • locallyoptimistic.com
    • Twitter: @emilieschario

    097 - Why Regions Bank’s CDAO, Manav Misra, Implemented a Product-Oriented Approach to Designing Data Products Aug 09, 2022

    Today, I chat with Manav Misra, Chief Data and Analytics Officer at Regions Bank. I begin by asking Manav what it was like to come in and implement a user-focused mentality at Regions, driven by his experience in the software industry. Manav details his approach, which included developing a new data product partner role and using effective communication to gradually gain trust and cooperation from all the players on his team.

    Manav then talks about how, over time, he solidified a formal framework for his team to be trained to use this approach and how his hiring is influenced by a product orientation. We also discuss his definition of data product at Regions, which I find to be one of the best I’ve heard to date. Today, Regions Bank’s data products are delivering tens of millions of dollars in additional revenue to the bank. Given those results, I also dig into the role of design and designers to better understand who is actually doing the designing of Regions’ data products to make them so successful. Later, I ask Manav what it’s like when designers and data professionals work on the same team and how UX and data visualization design are handled at the bank.

    Towards the end, Manav shares what he has learned from his time at Regions and what he would implement in a new organization if starting over. He also expounds on the importance of empowering his team to ask customers the right questions and how a true client/stakeholder partnership has led to Manav’s most successful data products.

    Highlights / Skip to:

    • Brief history of decision science and how it influenced the way data science and analytics work has been done (and unfortunately still is in many orgs) (1:47)
    • Manav’s philosophy and methods for changing the data science culture at Regions Bank to being product and user-driven (5:19)
    • Manav talks about the size of his team and the data product role within the team as well as what he had to do to convince leadership to buy in to the necessity of the data product partner role (10:54)
    • Quantifying and measuring the value of data products at Regions and some of his results (which include tens of millions of dollars in additional revenue) (13:05)
    • What’s a “data product” at Regions? Manav shares his definition (13:44)
    • Who does the designing of data products at Regions? (17:00)
    • The challenges and benefits of having a team comprised of both designers and data scientists (20:10)
    • Lessons Manav has learned from building his team and culture at Regions (23:09)
    • How Manav coaches his team and gives them the confidence to ask the right questions (27:17)
    • How true partnership has led to Manav’s most successful data products (31:46)
    Quotes from Today’s Episode
    • Re: how traditional, non-product oriented enterprises do data work: “As younger people come out of data science programs…that [old] culture is changing. The folks coming into this world now are looking to make an impact and then they want to see what this can do in the real world.” — Manav
    • On the role of the Data Product Partner: “We brought in people that had both business knowledge as well as the technical knowledge, so with a combination of both they could talk to the ‘internal customers’ of our data products, but they could also talk to the data scientists and our developers and communicate in both directions in order to form that bridge between the two.” — Manav
    • “There are products that are delivering tens of millions of dollars in terms of additional revenue, or stopping fraud, or any of those kinds of things that the products are designed to address, they’re delivering and over-delivering on the business cases that we created.” — Manav
    • “The way we define a data product is this: an end-to-end software solution to a problem that the business has. It leverages data and advanced analytics heavily in order to deliver that solution.” — Manav
    • “The deployment and operationalization is simply part of the solution. They are not something that we do after; they’re something that we design in from the start of the solution.” — Brian
    • “Design is a team sport. And even if you don’t have a titled designer doing the work, if someone is going to use the solution that you made, whether it’s a dashboard, or report, or an email, or notification, or an application, or whatever, there is a design, whether you put intention behind it or not.” — Brian
    • “As you look at interactive components in your data product, which are, you know, allowing people to ask questions and then get answers, you really have to think through what that interaction will look like, what’s the best way for them to get to the right answers and be able to use that in their decision-making.” — Manav
    • “I have really instilled in my team that tools will come and go, technologies will come and go, [and so] you’ll have to have that mindset of constantly learning new things, being able to adapt and take on new ideas and incorporate them in how we do things.” — Manav
    Links
    • Regions Bank: https://www.regions.com/
    • LinkedIn: https://www.linkedin.com/in/manavmisra/

    096 - Why Chad Sanderson, Head of Product for Convoy’s Data Platform, is a Champion of Data UX Jul 26, 2022

    Today I chat with Chad Sanderson, Head of Product for Convoy’s data platform. I begin by having Chad explain why he calls himself a “data UX champion” and what inspired his interest in UX. Coming from a non-UX background, Chad explains how he came to develop a strategy for addressing the UX pain points at Convoy—a digital freight network. They “use technology to make freight more efficient, reducing costs for some of the nation’s largest brands, increasing earnings for carriers, and eliminating carbon emissions from our planet.” We also get into the metrics of success that Convoy uses to measure UX and why Chad is so heavily focused on user workflow when making the platform user-centered.

    Later, Chad shares his definition of a data product, and how his experience with building software products has overlapped with data products. He also shares what he thinks is different about creating data products vs. traditional software products. Chad then explains Convoy’s approach to prototyping and the value of partnering with users in the design process. We wrap up by discussing how UX work gets accomplished on Chad’s team, given it doesn’t include any titled UX professionals.

    Highlights:

    • Chad explains how he became a data UX champion and what prompted him to care about UX (1:23)
    • Chad talks about his strategy for beginning to address the UX issues at Convoy (4:42)
    • How Convoy measures UX improvement (9:19)
    • Chad talks about troubleshooting user workflows and its relevance to design (15:28)
    • Chad explains what Convoy is and the makeup of his data platform team (21:00)
    • What is a data product? Chad gives his definition and the similarities and differences between building software versus data products (23:21)
    • Chad talks about using low fidelity work and prototypes to optimize solutions and resources in the long run (27:49)
    • We talk about the value of partnering with users in the design process (30:37)
    • Chad talks about the distribution of UX labor on his team (32:15)
    Quotes from Today’s Episode

    Re: user research: "The best content that you get from people is when they are really thinking about what to say next; you sort of get into a free-flowing exchange of ideas. So it’s important to find the topic where someone can just talk at length without really filtering themselves. And I find a good place to start with that is to just talk about their problems. What are the painful things that you’ve experienced in data in the last month or in the last week?" - Chad

    Re: UX research: "I often recommend asking users to show you something they were working on recently, particularly when they were having a problem accomplishing their goal. It’s a really good way to surface UX issues because the frustration is probably fresh." - Brian

    Re: user feedback, “One of the really great pieces of advice that I got is, if you’re getting a lot of negative feedback, this is actually a sign that people care. And if people care about what you’ve built, then it’s better than overbuilding from the beginning.” - Chad

    “What we found [in our research around workflow], though, sometimes counterintuitively, is that the steps that are the easiest and simplest for a customer to do that I think most people would look at and say, ‘Okay, it’s pretty low ROI to invest in some automated solution or a product in this space,’ are sometimes the most important things that you can [address in your data product] because of the impacts that it has downstream.” - Chad

    Re: user feedback, “The amazing thing about building data products, and I guess any internal products is that 100% of your customers sit ten feet away from you. [...] When you can talk to 100% of [your users], you are truly going to understand [...] every single persona. And that is tremendously effective for creating compelling narratives about why we need to build a particular thing.” - Chad

    “If we can get people to really believe that this data product is going to solve the problem, then usually, we like to turn those people into advocates and evangelists within the company, and part of their job is to go out and convince other people about why this thing can solve the problem.” - Chad

    Links:
    • Convoy: https://convoy.com/
    • Chad on LinkedIn: https://www.linkedin.com/in/chad-sanderson/
    • Chad’s Data Products newsletter: https://dataproducts.substack.com

    095 - Increasing Adoption of Data Products Through Design Training: My Interview from TDWI Munich Jul 12, 2022

    Today I am bringing you a recording of a live interview I did at the TDWI Munich conference for data leaders, and this episode is a bit unique as I’m in the “guest” seat being interviewed by the VP of TDWI Europe, Christoph Kreutz.

    Christoph wanted me to explain the new workshop I was giving later that day, which focuses on helping leaders increase user adoption of data products through design. In our chat, I explained the three main areas I pulled out of my full 4-week seminar to create this new ½-day workshop as well as the hands-on practice that participants would be engaging in. The three focal points for the workshop were: measuring usability via usability studies, identifying the unarticulated needs of stakeholders and users, and sketching in low fidelity to avoid overcommitting to solutions that users won’t value.

    Christoph also asks about the format of the workshop, and I explain how I believe data leaders will best learn design by doing it. As such, the new workshop was designed to use small group activities, role-playing scenarios, peer review…and minimal lecture! After discussing the differences between the abbreviated workshop and my full 4-week seminar, we talk about my consulting and training business “Designing for Analytics,” and conclude with a fun conversation about music and my other career as a professional musician.

    In a hurry? Skip to:

    • I summarize the new workshop version of “Designing Human-Centered Data Products” I was premiering at TDWI (4:18)
    • We talk about the format of my workshop (7:32)
    • Christoph and I discuss future opportunities for people to participate in this workshop (9:37)
    • I explain the format of the full 4-week seminar versus the new half-day workshop (10:14)
    • We talk about one-on-one coaching (12:22)
    • I discuss my background, including my formal music training and my other career as a professional musician (14:03)
    Quotes from Today’s Episode
    • “We spend a lot of time building outputs and infrastructure and pipelines and data engineering and generating stuff, but not always generating outcomes. Users only care about how does this make my life better, my job better, my job easier? How do I look better? How do I get a promotion? How do I make the company more money? Whatever those goals are. And there’s a gap there sometimes, between the things that we ship and delivering these outcomes.” (4:36)
    • “In order to run a usability study on a data product, you have to come up with some type of learning goals and some kind of scenarios that you’re going to give to a user and ask them to go show me how you would do x using the data thing that we built for you.” (5:54)
    • “The reality is most data users and stakeholders aren’t designers and they’re not thinking about the user’s workflow and how a solution fits into their job. They don’t have that context. So, how do we get the really important requirements out of a user or stakeholder’s head? I teach techniques from qualitative UX interviewing, sales, and even hostage negotiation to get unarticulated needs out of people’s head.” (6:41)
    • “How do we work in low fidelity to get data leaders on the same page with a stakeholder or a user? How do we design with users instead of for them? Because most of the time, when we communicate visually, it starts to click (or you’ll know it’s not clicking!)” (7:05)
    • “There’s no right or wrong [in the workshop]. [The workshop] is really about the practice of using these design methods and not the final output that comes out of the end of it.” (8:14)
    • “You learn design by doing design so I really like to get data people going by trying it instead of talking about trying it. More design doing and less design thinking!” (8:40)
    • “The tricky thing [for most of my training clients], [and perhaps this is true with any type of adult education] is, ‘Yeah, I get the concept of what Brian’s talking about, but, how do I apply these design techniques to my situation? I work in this really weird domain, or on this particularly hard data space.’ Working on an exercise or real project, together, in small groups, is how I like to start making the conceptual idea of design into a tangible tool for data leaders.” (12:26)
    Links
    • Brian’s training seminar

    094 - The Multi-Million Dollar Impact of Data Product Management and UX with Vijay Yadav of Merck Jun 28, 2022

    Today I sit down with Vijay Yadav, head of the data science team at Merck Manufacturing Division. Vijay begins by relating his own path to adopting a data product and UX-driven approach to applied data science, and our chat quickly turns to the ever-present challenge of user adoption. Vijay discusses his process of designing data products with customers, as well as the impact that building user trust has on delivering business value. We go on to talk about what metrics can be used to quantify adoption and downstream value, and then Vijay discusses the financial impact he has seen at Merck using this user-oriented perspective. While we didn’t see eye to eye on everything, Vijay was able to show how focusing on the last mile UX has had a multi-million dollar impact on Merck. The conversation concludes with Vijay’s words of advice for other data science directors looking to get started with a design and user-centered approach to building data products that achieve adoption and have measurable impact.

    In our chat, we covered Vijay’s design process, metrics, business value, and more:

    • Vijay shares how he came to approach data science with a data product management approach and how UX fits in (1:52)
    • We discuss overcoming the challenge of user adoption by understanding user thinking and behavior (6:00)
    • We talk about the potential problems and solutions when users self-diagnose their technology needs (10:23)
    • Vijay delves into what his process of designing with a customer looks like (17:36)
    • We discuss the impact “solving on the human level” has on delivering real world benefits and building user trust (21:57)
    • Vijay talks about measuring user adoption and quantifying downstream value—and Brian discusses his concerns about tool usage metrics as means of doing this (25:35)
    • Brian and Vijay discuss the multi-million dollar financial and business impact Vijay has seen at Merck using a more UX driven approach to data product development (31:45)
    • Vijay shares insight on what steps a head of data science might wish to take to get started implementing a data product and UX approach to creating ML and analytics applications that actually get used (36:46)
    Quotes from Today’s Episode
    • “They will adopt your solution if you are giving them everything they need so they don’t have to go look for a workaround.” - Vijay (4:22)
    • “It’s really important that you not only capture the requirements, you capture the thinking of the user, how the user will behave if they see a certain way, how they will navigate, things of that nature.” - Vijay (7:48)
    • “When you’re developing a data product, you want to be making sure that you’re taking the holistic view of the problem that can be solved, and the different group of people that we need to address. And, you engage them, right?” - Vijay (8:52)
    • “When you’re designing in low fidelity, it allows you to design with users because you don’t spend all this time building the wrong thing upfront, at which point it’s really expensive in time and money to go and change it.” - Brian (17:11)
    • "People are the ones who make things happen, right? You have all the technology, everything else looks good, you have the data, but the people are the ones who are going to make things happen.” - Vijay (38:47)
    • “You want to make sure that you [have] a strong team and motivated team to deliver. And the human spirit is something, you cannot believe how stretchable it is. If the people are motivated, [and even if] you have less resources and less technology, they will still achieve [your goals].” - Vijay (42:41)
    • “You’re trying to minimize any type of imposition on [the user], and make it obvious why your data product is better—without disruption. That’s really the key to the adoption piece: showing how it is going to be better for them in a way they can feel and perceive. Because if they don’t feel it, then it’s just another hoop to jump through, right?” - Brian (43:56)
    Resources and Links:

    LinkedIn: https://www.linkedin.com/in/vijyadav/


    093 - Why Agile Alone Won’t Increase Adoption of Your Enterprise Data Products Jun 14, 2022


    In one of my past memos to my list subscribers, I addressed some questions about agile and data products. Today, I expound on each of these and share some observations from my consulting work. In some enterprise orgs, mostly outside of the software industry, agile is still new and perceived as a panacea. In reality, it can just become a factory for shipping features and outputs faster–with positive outcomes and business value being mostly absent. To increase the adoption of enterprise data products that have humans in the loop, it’s great to have agility in mind, but poor technology shipped faster isn’t going to serve your customers any better than what you’re doing now.

    Here are the 10 reflections I’ll dive into on this episode:

    1. You can't project manage your way out of a [data] product problem.
    2. The more you try to deploy agile at scale, take the trainings, and hire special "agilists", the more you're going to tend to measure success by how well you followed the Agile process.
    3. Agile is great for software engineering, but nobody really wants "software engineering" given to them. They do care about the perceived reality of your data product.
    4. Run from anyone that tells you that you shouldn't ever do any design, user research, or UX work "up front" because "that is waterfall."
    5. Everybody else is also doing modified scrum (or modified _______).
    6. Marty Cagan talks about this a lot, but in short: while the PM (product managers) may own the backlog and priorities, what’s more important is that these PMs “own the problem” space as opposed to owning features or being solution-centered.
    7. Before Agile can thrive, you will need strong senior leadership buy-in if you're going to do outcome-driven data product work.
    8. There's a huge promise in the word "agile." You've been warned.
    9. If you don't have a plan for how you'll do discovery work, defining clear problem sets and success metrics, and understanding customers' feelings, pains, needs, and wants, and the like, Agile won't deliver much improvement for data products (probably).
    10. Getting comfortable with shipping half-right, half-quality, half-done is hard.
    Quotes from Today’s Episode
    • “You can get lost in following the process and thinking that as long as we do that, we’re going to end up with a great data product at the end.” - Brian (3:16)
    • “The other way to define clear success criteria for data products and hold yourself accountable to those on the user and business side is to really understand what does a positive outcome look like? How would we measure it?” - Brian (5:26)
    • “The most important thing is to know that the user experience is the perceived reality of the technology that you built. Their experience is the only reality that matters.” - Brian (9:22)
    • “Do the right amount of planning work upfront, have a strategy in place, make sure the team understands it collectively, and then you can do the engineering using agile.” - Brian (18:15)
    • “If you don’t have a plan for how you’ll do discovery work, defining clear problem sets and success metrics, and understanding customers’ feelings, pains, needs, wants, and all of that, then agile will not deliver increased adoption of your data products.” - Brian (36:07)
    Links:
    • designingforanalytics.com: https://designingforanalytics.com
    • designingforanalytics.com/list: https://designingforanalytics.com/list

    092 - How to measure data product value from a UX and business lens (and how not to do it) May 31, 2022

    Today I’m talking about how to measure data product value from a user experience and business lens, and where leaders sometimes get it wrong. Today’s first question comes from my recent talk at the Data Summit conference, where an attendee asked how UX design fits into agile data product development. Additionally, I recently had a subscriber to my Insights mailing list ask about how to measure adoption, utilization, and satisfaction of data products. So, we’ll jump into that juicy topic as well. Answering these inquiries also got me on a related tangent about the UX challenges associated with abstracting your platform to support multiple, but often theoretical, user needs—and the importance of collaboration to ensure your whole team is operating from the same set of assumptions or definitions about success. I conclude the episode with the concept of “game framing” as a way to conceptualize these ideas at a high level.

    Key topics and cues in this episode include:

    • An overview of the questions I received (0:45)
    • Measuring change once you’ve established a benchmark (7:45)
    • The challenges of working in abstractions (abstracting your platform to facilitate theoretical future user needs) (10:48)
    • The value of having shared definitions and understanding the needs of different stakeholders/users/customers (14:36)
    • The importance of starting from the “last mile” (19:59)
    • The difference between success metrics and progress metrics (24:31)
    • How measuring feelings can be critical to measuring success (29:27)
    • “Game framing” as a way to understand tracking progress and success (31:22)
    Quotes from Today’s Episode
    • “Once you’ve got your benchmark in place for a data product, it’s going to be much easier to measure what the change is because you’ll know where you’re starting from.” - Brian (7:45)
    • “When you’re deploying technology that’s supposed to improve people’s lives so that you can get some promise of business value downstream, this is not a generic exercise. You have to go out and do the work to understand the status quo and what the pain is right now from the user's perspective.” - Brian (8:46)
    • “That user perspective—perception even—is all that matters if you want to get to business value. The user experience is the perceived quality, usability, and utility of the data product.” - Brian (13:07)
    • “A data product leader’s job should be to own the problem and not just the delivery of data product features, applications or technology outputs.” - Brian (26:13)
    • “What are we keeping score of? Different stakeholders are playing different games so it’s really important for the data product team not to impose their scoring system (definition of success) onto the customers, or the users, or the stakeholders.” - Brian (32:05)
    • “We always want to abstract once we have a really good understanding of what people do, as it’s easier to create more user-centered abstractions that will actually answer real data questions later on.” - Brian (33:34)
    Links
    • https://designingforanalytics.com/community

    091 - How Brazil’s Biggest Fiber Company, Oi, Leverages Design To Create Useful Data Products with Sr. Exec. Design Manager, João Critis May 17, 2022

    Today I talked with João Critis from Oi. Oi is a Brazilian telecommunications company that is a pioneer in convergent broadband services, pay TV, and local and long-distance voice transmission. They operate the largest fiber optics network in Brazil which reaches remote areas to promote digital inclusion of the population. João manages a design team at Oi that is responsible for the front end of data products including dashboards, reports, and all things data visualization. We begin by discussing João’s role leading a team of data designers. João then explains what data products actually are, and who makes up his team’s users and customers. João goes on to discuss user adoption challenges at Oi and the methods they use to uncover what users need in the last mile. He then explains the specific challenges his team has faced, particularly with middle management, and how his team builds credibility with senior leadership. In conclusion, João reflects on the value of empathy in the design process.

    In this episode, João shares:

    • What a data product actually is (4:48)
    • The research process used by his data teams to build journey maps for clients (7:31)
    • User adoption challenges for Oi (15:27)
    • His answer to the question “how do you decide which mouths to feed?” (16:56)
    • The unique challenges of middle management in delivering useful data products (20:33)
    • The importance of empathy in innovation (25:23)
    • What data scientists need to learn about design and vice versa (27:55)

    Quotes from Today’s Episode

    • “We put the final user in the center of our process. We [conduct] workshops involving co-creation and prototyping, and we test how people work with data.” - João (8:22)
    • "My first responsibility here is value generation. So, if you have to take two or three steps back, another brainstorm, rethink, and rebuild something that works…. [well], this is very common for us.” - João (19:28)
    • “If you don’t make an impact on the individuals, you’re not going to make an impact on the business. Because as you said, if they don’t use any of the outputs we make, then they really aren’t solutions and no value is created.” - Brian (25:07)
    • “It’s really important to do what we call primary research where you’re directly interfacing as much as possible with the horse’s mouth, no third parties, no second parties. You’ve really got to develop that empathy.” - Brian (25:23)
    • “When we are designing some system or screen or other digital artifact, [we have to understand] this is not only digital, but a product. We have to understand people, how people interact with systems, with computers, and how people interact with visual presentations.” - João (28:16)
    Links
    • Oi: https://www.oi.com.br/
    • LinkedIn: https://www.linkedin.com/in/critis/
    • Instagram: https://www.instagram.com/critis/

    090 - Michelle Carney’s Mission With MLUX: Bringing UX and Machine Learning Together May 03, 2022

    Michelle Carney began her career in the worlds of neuroscience and machine learning, where she worked on the original Python Notebooks. As she fine-tuned ML models and started to notice discrepancies in the human experience of using these models, her interest turned towards UX. Michelle discusses how her work today as a UX researcher at Google impacts her work with teams leveraging ML in their applications. She explains how her interest in the crossover of ML and UX led her to start MLUX, a collection of meet-up events where professionals from both data science and design can connect and share methods and ideas. MLUX now hosts meet-ups in several locations as well as virtually.

    Our conversation begins with Michelle’s explanation of how she teaches data scientists to integrate UX into the development of their products. As a teacher, Michelle utilizes the IDEO Design Kit with her students at the Stanford School of Design (d.school), and in her course, Designing Machine Learning, she shares some of the unlearning that data scientists need to do when trying to approach their work with a UX perspective. We also discussed what UX designers need to know about designing for ML/AI, how model interpretability is a facet of UX design, and why model accuracy isn’t always the most important element of a ML application. Michelle ends the conversation with an emphasis on the need for more interdisciplinary voices in the fields of ML and AI.

    Skip to a topic here:

    • Michelle talks about what drove her career shift from machine learning and neuroscience to user experience (1:15)
    • Michelle explains what MLUX is (4:40)
    • How to get ML teams on board with the importance of user experience (6:54)
    • Michelle discusses the “unlearning” data scientists might have to do as they reconsider ML from a UX perspective (9:15)
    • Brian and Michelle talk about the importance of considering the UX from the beginning of model development (10:45)
    • Michelle expounds on different ways to measure the effectiveness of user experience (15:10)
    • Brian and Michelle talk about what is driving the increase in the need for designers on ML teams (19:59)
    • Michelle explains the role of design around model interpretability and explainability (24:44)
    Quotes from Today’s Episode
    • “The first step to business value is the hurdle of adoption. A user has to be willing to try—and care—before you ever will get to business value.” - Brian O’Neill (13:01)
    • “There’s so much talk about business value and there’s very little talk about adoption. I think providing value to the end-user is the gateway to getting any business value. If you’re building anything that has a human in the loop that’s not fully automated, you can’t get to business value if you don’t get through the first gate of adoption.” - Brian O’Neill (13:17)
    • “I think that designers who are able to design for ambiguity are going to be the ones that tackle a lot of this AI and ML stuff.” - Michelle Carney (19:43)
    • “That’s something that we have to think about with our ML models. We’re coming into this user’s life where there’s a lot of other things going on and our model is not their top priority, so we should design it so that it fits into their ecosystem.” - Michelle Carney (3:27)
    • “If we aren’t thinking about privacy and ethics and explainability and usability from the beginning, then it’s not going to be embedded into our products. If we just treat usability of our ML models as a checkbox, then it just plays the role of a compliance function.” - Michelle Carney (11:52)
    • “I don’t think you need to know ML or machine learning in order to design for ML and machine learning. You don’t need to understand how to build a model, you need to understand what the model does. You need to understand what the inputs and the outputs are.” - Michelle Carney (18:45)
    Links
    • Twitter @mluxmeetup: https://twitter.com/mluxmeetup
    • MLUX LinkedIn: https://www.linkedin.com/company/mlux/
    • MLUX YouTube channel: https://bit.ly/mluxyoutube
    • Twitter @michelleRcarney: https://twitter.com/michelleRcarney
    • IDEO Design Kit - https://tinyurl.com/2p984znh

    089 - Reader Questions Answered about Dashboard UX Design Apr 19, 2022

    Dashboards are at the forefront of today’s episode, and so I will be responding to some questions from readers who wrote in to one of my weekly mailing list missives about this topic. I’ve not talked much about dashboards despite their frequent appearance in data product UIs, and in this episode, I’ll explain why. Here are some of the key points and the original questions asked in this episode:

    • My introduction to dashboards (00:00)
    • Some overall thoughts on dashboards (02:50)
    • What the risk is to the user if the insights are wrong or misinterpreted (4:56)
    • Your data outputs create an experience, whether intentional or not (07:13)
    • John asks: How do we figure out exactly what the jobs are that the dashboard user is trying to do? Are they building next year's budget or looking for broken widgets? What does this user value today? Is a low resource utilization percentage something to be celebrated or avoided for this dashboard user today? (13:05)
    • Value is not intrinsically in the dashboard (18:47)
    • Mareike asks: How do we provide information in a way that people are able to act upon the presented information? How do we translate the presented information into action? What can we learn about user expectation management when designing dashboard/analytics solutions? (22:00)
    • The change towards predictive and prescriptive analytics (24:30)
    • The upfront work that needs to get done before the technology is in front of the user (30:20)
    • James asks: How can we get people to focus less on the assumption-laden and often restrictive term "dashboard", and instead worry about designing solutions focused on outcomes for particular personas and workflows that happen to have some or all of the typical ingredients associated with the catch-all term "dashboards"? (33:30)
    • Stop measuring the creation of outputs and focus on the user workflows and the jobs to be done (37:00)
    • The data product manager shouldn’t just be focused on deliverables (42:28)
    Quotes from Today’s Episode
    • “The term ‘dashboard’ is almost meaningless today; it seems to mean almost any default home screen in a data product. It can also just mean a report. For others, it means an entire monitoring tool; for some, it means the summary of a bunch of data that lives in some other reports. The terms are all over the place.” - Brian (@rhythmspice) (01:36)
    • “The big idea here that I really want leaders to be thinking about is you need to get your teams focused on workflows—sometimes called jobs to be done—and the downstream decisions that users want to make with machine-learning or analytical insights.” - Brian (@rhythmspice) (06:12)
    • “This idea of human-centered design and user experience is really about trying to fit the technology into their world, from their perspective as opposed to building something in isolation where we then try to get them to adopt our thing. This may be out of phase with the way people like to do their work and may lead to a much higher barrier to adoption.” - Brian (@rhythmspice) (14:30)
    • “Leaders who want their data science and analytics efforts to show value really need to understand that value is not intrinsically in the dashboard or the model or the engineering or the analysis.” - Brian (@rhythmspice) (18:45)
    • “There's a whole bunch of plumbing that needs to be done, and it’s really difficult. The tool that we end up generating in those situations tends to be a tool that’s modeled around the data and not modeled around [the customer’s] mental model of this space, the customer purchase space, the marketing spend space, the sales conversion, or propensity-to-buy space.” - Brian (@rhythmspice) (27:48)
    • “Data product managers should be these problem owners, if there has to be a single entity for this. When we’re talking about different initiatives in the enterprise or for a commercial software company, it really sits at this product management function.” - Brian (@rhythmspice) (34:42)
    • “It’s really important that [data product managers] are not just focused on deliverables; they need to really be the ones that summarize the problem space for the entire team, and help define a strategy with the entire team that clarifies the direction the team is going in. They are not a project manager; they are someone responsible for delivering value.” - Brian (@rhythmspice) (42:23)

    Links Referenced:

    • Mailing List: https://designingforanalytics.com/list
    • CED UX Framework for Advanced Analytics:
      • Original Article: https://designingforanalytics.com/ced
      • Podcast/Audio Episode: https://designingforanalytics.com/resources/episodes/086-ced-my-ux-framework-for-designing-analytics-tools-that-drive-decision-making/
    • My LinkedIn Live about Measuring the Usability of Data Products: https://www.linkedin.com/video/event/urn:li:ugcPost:6911800738209800192/
    • Work With Me / My Services: https://designingforanalytics.com/services

    088 - Doing UX Research for Data Products and The Magic of Qualitative User Feedback with Mike Oren, Head of Design Research at Klaviyo Apr 05, 2022

    Mike Oren, Head of Design Research at Klaviyo, joins today’s episode to discuss how we do UX research for data products—and why qualitative research matters. Mike and I recently met in Lou Rosenfeld’s Quant vs. Qual group, which is for people interested in both qualitative and quantitative methods for conducting user research. Mike goes into the details on how Klaviyo and his teams are identifying what customers need through research, how they use data to get to that point, what data scientists and non-UX professionals need to know about conducting UX research, and some tips for getting started quickly. He also explains how Klaviyo’s data scientists—not just the UX team—are directly involved in talking to users to develop an understanding of their problem space.

    Klaviyo is a communications platform that allows customers to personalize email and text messages powered by data. In this episode, Mike talks about how to ask research questions to get at what customers actually need. Mike also offers some excellent “getting started” techniques for conducting interviews (qualitative research), the kinds of things to be aware of and avoid when interviewing users, and some examples of the types of findings you might learn. He also gives us some examples of how these research insights become features or solutions in the product, and how they interpret whether their design choices are actually useful and usable once a customer interacts with them. I really enjoyed Mike’s take on designing data-driven solutions, his ideas on data literacy (for both designers and users), and hearing about the types of dinner conversations he has with his wife, who is an economist ;-). Check out our conversation for Mike’s take on the relevance of research for data products and user experience.

    In this episode, we cover:

    • Using “small data” such as qualitative user feedback to improve UX and data products—and the #1 way qualitative data beats quantitative data (01:45)
    • Mike explains what Klaviyo is, and gives an example of how they use qualitative information to inform the design of this communications product (03:38)
    • Mike discusses Klaviyo data scientists doing research and their methods for conducting research with their customers (09:45)
    • Mike’s tips on what to avoid when you’re conducting research so you get objective, useful feedback on your data product (12:45)
    • Why dashboards are Mike’s pet peeve (17:45)
    • Mike’s thoughts about data illiteracy, how much design needs to accommodate it, and how design can help with it (22:36)
    • How Mike conveys the research to other teams that help mitigate risk (32:00)
    • Life with an economist! (36:00)
    • What the UX and design community needs to know about data (38:30)
    Quotes from Today’s Episode
    • “I actually tell my team never to do any qualitative research around preferences…Preferences are usually something that you’re not going to get a reliable enough sample from if you’re just getting it qualitatively, just because preferences do tend to vary a lot from individual to individual; there’s lots of other factors.” - Mike (@mikeoren) (03:05)
    • “[Discussing a product design choice influenced by research findings]: Three options gave [the customers a] feeling of more control. In terms of what actual options they wanted, two options was really the most practical, but the thing was that we weren’t really answering the main question that they had, which was what was going to happen with their data if they restarted the test with a new algorithm that was being used. That was something that we wouldn’t have been able to identify if we were only looking at the quantitative data, if we were only surveying them; we had to get them to voice their concerns about it.” - Mike (@mikeoren) (07:00)
    • “When people create dashboards, they stick everything on there. If a stakeholder within the organization asked for a piece of data, that goes on the dashboard. If one time a piece of information was needed with other pieces of information that are already on the dashboard, that now gets added to the dashboard. And so you end up with dashboards that just have all these different things on them…you no longer have a clear line of signal.” - Mike (@mikeoren) (17:50)
    • “Part of the experience we need to talk about when we talk about experiencing data is that the experience can happen in additional vehicles besides a dashboard: a text message, an email notification; there’s other ways to experience the effects of good, intelligent data product work. Pushing the right information at the right time instead of all the information all the time.” - Brian (@rhythmspice) (20:00)
    • “[Data illiteracy is] everyone’s problem. Depending upon what type of data we’re talking about, and what that product is doing, if an organization is truly trying to make data-driven decisions, but then they haven’t trained their leaders to understand the data in the right way, then they’re not actually making data-driven decisions; they’re really making instinctual decisions, or they’re pretending that they’re using the data.” - Mike (@mikeoren) (23:50)
    • “Sometimes statistical significance doesn’t matter to your end-users. More often than not organizations aren’t looking for 95% significance. Usually, 80% is actually good enough for most business decisions. Depending upon the cost of getting a high level of confidence, they might not even really value that additional 15% significance.” - Mike (@mikeoren) (31:06)
    • “In order to effectively make software easier for people to use, to make it useful to people, [designers have] to learn a minimum amount about that medium in order to start crafting those different pieces of the experience that we’re preparing to provide value to people. We’re running into the same thing with data applications where it’s not enough to just know that numbers exist and those are a thing, or to know some graphic primitives of line charts, bar charts, et cetera. As a designer, we have to understand that medium well enough that we can have a conversation with our partners on the data science team.” - Mike (@mikeoren) (39:30)

    087 - How Data Product Management and UX Integrate with Data Scientists at Albertsons Companies to Improve the Grocery Shopping Experience Mar 22, 2022

    For Danielle Crop, the Chief Data Officer of Albertsons, to draw distinctions between “digital” and “data” only limits the ability of an organization to create useful products. One of the reasons I asked Danielle on the show is her background as a CDO and former SVP of digital at AMEX, where she also managed product and design groups. My theory is that data leaders who have been exposed to the worlds of software product and UX design are prone to approach their data product work differently, and so that’s what we dug into in this episode. It didn’t take long for Danielle to share how she pushes her data science team to collaborate with business product managers for a “cross-functional, collaborative” end result. This also means getting the team to understand what their models are personalizing, and how customers experience the data products they use. In short, for her, it is about getting the data team to focus on “outcomes” vs. “outputs.” Scaling some of the data science and ML modeling work at Albertsons is a big challenge, and we talked about one of the big use cases she is trying to enable for customers, as well as one “real-life” non-digital experience that her team’s data science efforts are behind. The big takeaway for me here was hearing how a CDO like Danielle is really putting customer experience and the company’s brand at the center of their data product work, as opposed to solely focusing on ML model development, dashboard/BI creation, and seeing data as a raw ingredient that lives in a vacuum isolated from people.

    In this episode, we cover:

    • Danielle’s take on the “D” in CDO: is the distinction between “digital” and “data” even relevant, especially for a food and drug retailer? (01:25)
    • The role of data product management and design in her org and how UX (i.e. shopper experience) is influenced by and considered in her team’s data science work (06:05)
    • How Danielle’s team thinks about “customers” particularly in the context of internal stakeholders vs. grocery shoppers (10:20)
    • Danielle’s current and future plans for bringing her data team into stores to better understand shoppers and customers (11:11)
    • How Danielle’s data team works with the digital shopper experience team (12:02)
    • “Outputs” versus “Outcomes” for product managers, data science teams, and data products (16:30)
    • Building customer loyalty, in-store personalization, and long term brand interaction with data science at Albertsons (20:40)
    • How Danielle and her team at Albertsons measure the success of their data products (24:04)
    • Finding the problems, building the solutions, and connecting the data to the non-technical side of the company (29:11)
    Quotes from Today’s Episode
    • “Data always comes from somewhere, right? It always has a source. And in our modern world, most of that source is some sort of digital software. So, to distinguish your data from its source is not very smart as a data scientist. You need to understand your data very well, where it came from, how it was developed, and software is a massive source of data. [As a CDO], I think it’s not important to distinguish between [data and digital]. It is important to distinguish between roles and responsibilities, you need different skills for these different areas, but to create an artificial silo between them doesn’t make a whole lot of sense to me.”- Danielle (03:00)
    • “Product managers need to understand what the customer wants, what the business needs, how to pass that along to data scientists, and data scientists need to understand how that’s affecting business outcomes. That’s how I see this all working. And it depends on what type of models they’re customizing and building, right? Are they building personalization models that are going to be a digital asset? Are they building automation models that will go directly to some sort of operational activity in the store? What are they trying to solve?” - Danielle (06:30)
    • “In a company that sells products—groceries—to individuals, personalization is a huge opportunity. How do we make that experience, both in-digital and in-store, more relevant to the customer, more sticky and build loyalty with those customers? That’s the core problem, but underneath that is you got to build a lot of models that help personalize that experience. When you start talking about building a lot of different models, you need scale.” - Danielle (9:24)
    • “[Customer interaction in the store] is a true big data problem, right, because you need to use the WiFi devices, et cetera, that you have in store that are pinging the devices at all times, and it’s a massive amount of data. Trying to weed through that and find the important signals that help us to actually drive that type of personalized experience is challenging. No one’s gotten there yet. I hope that we’ll be the first.” - Danielle (19:50)
    • “I can imagine a checkout clerk who doesn’t want to talk to the customer, despite a data-driven suggestion appearing on the clerk’s monitor as to how to personalize a given customer interaction. The recommendation suggested to the clerk may be ‘accurate’ from a data science point of view, but if the clerk doesn’t actually act on it, then the data product didn’t provide any value. When I train people in my seminar, I try to get them thinking about that last mile. It may not be data science work, and maybe you have a big enough org where that clerk/customer experience is someone else’s responsibility, but being aware that this is a fault point and having a cross-team perspective is key.” - Brian (@rhythmspice) (24:50)
    • “We’re going through a moment in time in which trust in data is shaky. What I’d like people to understand and know on a broader philosophical level, is that in order to be able to understand data and use it to make decisions, you have to know its source. You have to understand its source. You have to understand the incentives around that source of data….you have to look at the data from the perspective of what it means and what the incentives were for creating it, and then analyze it, and then give an output. And fortunately, most statisticians, most data scientists, most people in most fields that I know, are incredibly motivated to be ethical and accurate in the information that they’re putting out.” - Danielle (34:15)

    086 - CED: My UX Framework for Designing Analytics Tools That Drive Decision Making Mar 08, 2022

    Today, I’m flying solo in order to introduce you to CED: my three-part UX framework for designing your ML / predictive / prescriptive analytics UI around trust, engagement, and indispensability. Why this, why now? I have had several people tell me that this has been incredibly helpful to them in designing useful, usable analytics tools and decision support applications.

    I have written about the CED framework before at the following link:

    https://designingforanalytics.com/ced

    There you will find an example of the framework put into a real-world context. In this episode, I wanted to add some extra color to what is discussed in the article. If you’re an individual contributor, the best part is that you don’t have to be a professional designer to begin applying this to your own data products. And for leaders of teams, you can use the ideas in CED as a “checklist” when trying to audit your team’s solutions in the design phase—before it’s too late or expensive to make meaningful changes to the solutions.

    CED is definitely easier to implement if you understand the basics of human-centered design, including research, problem finding and definition, journey mapping, consulting, and facilitation etc. If you need a step-by-step method to develop these foundational skills, my training program, Designing Human-Centered Data Products, might help. It comes in two formats: a Self-Guided Video Course and a bi-annual Instructor-Led Seminar.

    Quotes from Today’s Episode
    • “‘How do we visualize the data?’ is the wrong starting question for designing a useful decision support application. That makes all kinds of assumptions: that we have the right information, that we know what the users’ goals and downstream decisions are, and that we know how our solution will make a positive change in the customer’s or user’s life.” - Brian (@rhythmspice) (02:07)
    • “The CED is a UX framework for designing analytics tools that drive decision-making. Three letters, three parts: Conclusions (C), Evidence (E), and Data (D). The tough pill for some technical leaders to swallow is that the application, tool or product they are making may need to present what I call a ‘conclusion’—or if you prefer, an ‘opinion.’ Why? Because many users do not want an ‘exploratory’ tool—even when they say they do. They often need an insight to start with, before exploration time becomes valuable.” - Brian (@rhythmspice) (04:00)
    • “CED requires you to do customer and user research to understand what the meaningful changes, insights, and things that people want or need actually are. Well designed ‘Conclusions’—when experienced in an analytics tool using the CED framework—often manifest themselves as insights such as unexpected changes, confirmation of expected changes, meaningful change versus meaningful benchmarks, scoring how KPIs track to predefined and meaningful ranges, actionable recommendations, and next best actions. Sometimes these Conclusions are best experienced as charts and visualizations, but not always—and this is why visualizing the data rarely is the right place to begin designing the UX.” - Brian (@rhythmspice) (08:54)
    • “If I see another analytics tool that promises ‘actionable insights’ but is primarily experienced as a collection of gigantic data tables with 10, 20, or 30+ columns of data to parse, your design is almost certainly going to frustrate, if not alienate, your users. Not because all table UIs are bad, but because you’ve put a gigantic tool-time tax on the user, forcing them to derive what the meaningful conclusions should be.” - Brian (@rhythmspice) (20:20)

    085 - Dr. William D. Báez on the Journey and ROI of Integrating UX Design into Machine Learning and Analytics Solutions Feb 22, 2022

    Why design matters in data products is a question that, at first glance, may not be easily answered for some until they see users try to use ML models and analytics to make decisions. For Bill Báez, a data scientist and VP of Strategy at Ascend Innovations, the realization that design and UX matter in this context grew over the course of a few years. Bill’s origins in the Air Force, and his transition to Ascend Innovations, instilled lessons about the importance of using design thinking with both clients and users.

    After observing solutions built in total isolation with zero empathy and knowledge of how they were being perceived in the wild, Bill realized the critical need to bring developers “upstairs” to actually observe the people using the solutions that were being built.

    Currently, Ascend Innovations’ consulting is primarily rooted in healthcare and community services, and in this episode, Bill provides some real-world examples where their machine learning and analytics solutions were informed by approaching the problems from a human-centered design perspective. Bill also dives into where he is on his journey to integrate his UX and data science teams at Ascend so they can create better value for their clients and their clients’ constituents.

    Highlights in this episode include:

    • What caused Bill to notice design for the first time and its importance in data products (03:12)
    • Bridging the gap between data science, UX, and the client’s needs at Ascend (08:07)
    • How to deal with the “presenting problem” and working with feedback (16:00)
    • Bill’s advice for getting designers, UX, and clients on the same page based on his experience to date (23:56)
    • How Bill provides unity for his UX and data science teams (32:40)
    • The effects of UX in medicine (41:00)
    Quotes from Today’s Episode
    • “My journey into Design Thinking started in earnest when I started at Ascend, but I didn’t really have the terminology to use. For example, Design Thinking and UX were actually terms I was not personally aware of until last summer. But now that I know and have been exposed to it and have learned more about it, I realize I’ve been doing a lot of that type of work in earnest since 2018.” - Bill (03:37)
    • “Ascend Innovations has always been product-focused, although again, services is our main line of business. As we started hiring a more dedicated UX team, people who’ve been doing this for their whole career, it really helped me to understand what I had experienced prior to coming to Ascend. Part of the time I was here at Ascend that UX framework and that Design Thinking lens, it really brings a lot more firepower to what data science is trying to achieve at the end of the day.” - Bill (08:29)
    • “Clients were surprised that we were asking such rudimentary questions. They’ll say ‘Well, we’ve already talked about that,’ or, ‘It should be obvious,’ or, ‘Well, why are you asking me such a simple question?’ And we had to explain to them that we wanted to start at the bottom to move to the top. We don’t want to start somewhere midway and get to the top. We want to make sure that we are all in alignment with what we’re trying to do, so we want to establish that baseline of understanding. So, we’re going to start off asking very simple questions and work our way up from there...” - Bill (21:09)
    • “We’re building a thing, but the thing only has value if it creates a change in the world. The world being, in the mind of the stakeholder, in the minds of the users, maybe some third parties that are affected by that stuff, but it’s the change that matters. So what is the better state we want in the future for our client or for our customers and users? That’s the thing we’re trying to create. Not the thing; the change from the thing is what we want, and getting to that is the hard part.” - Brian (@rhythmspice) (26:33)
    • “This is a gift that you’re giving to [stakeholders] to save time, to save money, to avoid building something that will never get used and will not provide value to them. You do need to push back against this, and if they say no, that’s fine. Paint the picture of the risk, though, of not doing design. It’s very easy for us to build an ML model. It’s hard for us to build a model that someone will actually use to make the world better. And in this case, it’s healthcare or support, intervention support for addicts. ‘Do you really want a model, or do you want an improvement in the lives of these addicts?’ That’s ultimately where we’re going with this, and if we don’t do this, the risk of us pushing out an output that doesn’t get used is high. So, design is a gift, not a tax...” - Brian (@rhythmspice) (34:34)
    • “I’d say to anybody out there right now who’s currently working on data science efforts: the sooner you get your people comfortable with the idea of doing Design Thinking, get them implemented into the projects that are currently going on. [...] I think that will be a real game-changer for your data scientists and your organization as a whole...” - Bill (42:19)

    084 - The Messy Truth of Designing and Building a Successful Analytics SaaS Product featuring Jonathan Kay (CEO, Apptopia) Feb 08, 2022

    Building a SaaS business around a research tool, more than around a data product, is how Jonathan Kay, CEO and Co-Founder of Apptopia, frames his company’s work. Jonathan and I worked together when Apptopia pivoted from its prior business into a mobile intelligence platform for brands. Part of the reason I wanted to have Jonathan talk to you all is that I knew he would strip away all the easy-to-see shine and varnish from their success and get really candid about what worked…and what hasn’t…during their journey to turn a data product into a successful SaaS business. So get ready: Jonathan is going to reveal the very curvy line that Apptopia has taken to get where they are today.

    In this episode, Jonathan also describes one of the core product design frameworks that Apptopia is currently using to help deliver actionable insights to their customers. For Jonathan, Apptopia’s research-centric approach changes the ways in which their customers can interact with data and is helping eliminate the lull between “the why” and “the actioning.”

    Here are some of the key parts of the interview:

    • An introduction to Apptopia and how they serve brands in the world of mobile app data (00:36)
    • The current UX gaps that Apptopia is working to fill (03:32)
    • How Apptopia balances flexibility with ease-of-use (06:22)
    • How Apptopia establishes the boundaries of its product when it’s just one part of a user’s overall workflow (10:06)
    • The challenge of “low use, low trust” and getting “non-data” people to act (13:45)
    • Developing strong conclusions and opinions and presenting them to customers (18:10)
    • How Apptopia’s product design process has evolved when working with data, particularly at the UI level (21:30)
    • The relationship between Apptopia’s buyers and the users of the product, and how they balance the two (24:45)
    • Jonathan’s advice for hiring good data product design and management staff (29:45)
    • How data fits into Jonathan’s own decision making as CEO of Apptopia (33:21)
    • Jonathan’s advice for emerging data product leaders (36:30)
    Quotes from Today’s Episode
    • “I want to just give you some props on the work that you guys have done and seeing where it's gone from when we worked together. The word grit, I think, is the word that I most associate with you and Eli [former CEO, co-founder] from those times. It felt very genuine that you believed in your mission and you had a long-term vision for it.” - Brian T. O’Neill (@rhythmspice) (02:08)
    • “A research tool gives you the ability to create an input, which might be, ‘I want to see how Netflix is performing.’ And then it gives you a bunch of data. And it gives you good user experience that allows you to look for the answer to the question that’s in your head, but you need to start with a question. You need to know how to manipulate the tool. It requires a huge amount of experience and understanding of the data consumer in order to actually get the answer to the question. For me, that feels like a miss because I think the amount of people who need and can benefit from data, and the amount of people who know how to instrument the tools to get the answers from the data—well, I think there’s a huge disconnect in those numbers. And just like when I take my car to get service, I expected the car mechanic knows exactly what the hell is going on in there, right? Like, our obligation as a data provider should be to help people get closer to the answer. And I think we still have some room to go in order to get there.” - Jonathan Kay (@JonathanCKay) (04:54)
    • “You need to present someone the what, the why, etc.—then the research component [of your data product] is valuable. And so it’s not that having a research tool isn’t valuable. It’s just, you can’t have the whole thing be that. You need to give them the what and the why first.” - Jonathan Kay (@JonathanCKay) (08:45)
    • “You can't put equal resources into everything. Knowing the boundaries of your data product is important, but it's a hard thing to know sometimes where to draw those. A leader has to ask, ‘am I getting outside of my sweet spot? Is this outside of the mission?’ Figuring out the right boundaries goes back to customer research.” - Brian T. O’Neill (@rhythmspice) (12:54)
    • “What would I have done differently if I was starting Apptopia today? I would have invested into the quality of the data earlier. I let the product design move me into the clouds a little bit, because sometimes you're designing a product and you're designing visuals, but we were doing it without real data. One of the biggest things that I've learned over a lot of mistakes over a long period of time, is that we've got to incorporate real data in the design process.” - Jonathan Kay (@JonathanCKay) (20:09)
    • “We work with one of the biggest food manufacturer distributors in the world, and they were choosing between us and our biggest competitor, and what they essentially did was [say], ‘I need to put this report together every two weeks. I used your competitor’s platform during a trial and your platform during the trial, and I was able to do it two hours faster in your platform, so I chose you—because all the other checkboxes were equal.’ However, at the end of the day, if we could get two hours a week back by using your tool, saving time and saving money and making better decisions, they’re all equal ROI contributors.” - Jonathan Kay on UX (@JonathanCKay) (27:23)
    • “In terms of our product design and management hires, we're typically looking for people who have not worked at one company for 10 years. We've actually found a couple phenomenal designers that went from running their own consulting company to wanting to join full time. That was kind of a big win because one of them had a huge breadth of experience working with a bunch of different products in a bunch of different spaces.”- Jonathan Kay (@JonathanCKay) (30:34)
    • “In terms of how I use data when making decisions for Apptopia, here’s an example. If you break our business down into different personas, my understanding one time was that one of our personas was more stagnant. The data, however, did not support that. And so we're having a resource planning meeting, and I'm saying, ‘let's pull back resources a little bit,’ but [my team is] showing me data that says my assumption on that customer segment is actually incorrect. I think entrepreneurs and passionate people need data more because we have so much conviction in our decisions—and because of that, I'm more likely to make bad decisions. Theoretically good entrepreneurs should have good instincts, and you need to trust those, but what I’m saying is, you also need to check those. It's okay to make sure that your instinct is correct, right? And one of the ways that I’ve gotten more mature is by forcing people to show me data to back up my decision in either direction and being comfortable being wrong. And I am wrong at least half of the time with those things!” - Jonathan Kay (@JonathanCKay) (34:09)

    083 - Why Bob Goodman Thinks Product Management and Design Must Dance Together to Create “Experience Layers” for Data Products Jan 25, 2022

    Design takes many forms and shapes. It is an art, a science, and a method for problem solving. For Bob Goodman, a product management and design executive, the way to view design is as a story and a narrative that conveys the solution to the customer. As a former journalist with 20 years of experience in consumer and enterprise software, Bob has a unique perspective on enabling end-user decision making with data. Having worked in both product management and UX, Bob shapes the narrative on approaching product management and product design as parts of a whole, and we talked about how data products fit into this model. Bob also shares why he believes design and product need to be under the same umbrella to prevent organizational failures. We also discussed the challenges and complexities that come with delivering data-driven insights to end users when ML and analytics are behind the scenes.

    • An overview of Bob’s recent work as an SVP of product management - and why design, UX and product management were unified. (00:47)
    • Bob’s thoughts on centralizing the company data model - and how this data and storytelling are integral to the design process. (06:10)
    • How product managers and data scientists can gain perspective on their work. (12:22)
    • Bob describes a recent dashboard and analytics product, and how customers were involved in its creation. (18:30)
    • How “being wrong” is a method of learning - and a look at what Bob calls the “spotlight challenge.” (23:04)
    • Why productizing data science is challenging. (30:14)
    • Bob’s advice for making trusted data products. (33:46)
    Quotes from Today’s Episode
    • “[I think of] product management and product design as a unified function. How do those work together? There’s that Steve Jobs quote that we all know and love that design is not just what it looks like but it’s also how it works, and when you think of it that way, kind of end-to-end, you start to see product management and product design as very unified.”- Bob Goodman (@bob_goodman) (01:34)
    • “I have definitely experienced that some people see product management and design and UX as quite separate [...] And this has been a fascinating discovery because I think as a hybrid person, I didn’t necessarily draw those distinctions. [...] From a product and design standpoint, I personally was often used to, especially in startup contexts, starting with the data that we had to work with [...] and saying, ‘Oh, this is our object model, and this is where we have context, [...] and this is the end-to-end workflow.’ And I think it’s an evolution of the industry that there’s been more and more specialization, [and] training, and it’s maybe added some barriers that didn’t exist between these disciplines [in the past].”- Bob Goodman (@bob_goodman) (03:30)
    • “So many projects tend to fail because no one can really define what good means at the beginning. The strategy is not clear, the problem set is not clear. If you have a data team that thinks the job is to surface the insights from this data, a designer is thinking about the users’ discrete tasks, feelings, and objectives. They are not there to look at the data set; they are there to answer a question and inform a decision. For example, the objective is not to look at sleep data; it may be to understand, ‘am I getting enough rest?’”- Brian T. O’Neill (@rhythmspice) (08:22)
    • “I imagine that when one is fascinated by data, it might be natural to presume that everyone will share this equal fascination with a sort of sleuthing or discovery. And then it’s not the case. It’s TL;DR. And so, often users want the headline, or they even need the kind of headline news to start at a glance. And so this is where this idea of storytelling with data comes in, and some of the research [that helps us] understand the mindset that consumers come to the table with.”- Bob Goodman (@bob_goodman) (09:51)
    • “You were talking about this technologist’s idea of being ‘not user right, but it’s data right.’ I call this technically right, effectively wrong. This is not an infrequent thing that I hear about where the analysis might be sound, or the visualization might technically be the right thing for a certain type of audience. The difference is, are we designing for decision-making or are we designing to display the data that does tell some story, whether or not it informs the human decision-making that we’re trying to support? The former is what most analytics solutions should strive for.”- Brian T. O’Neill (@rhythmspice) (16:11)
    • “We were working to have a really unified approach and data strategy, and to deliver on that in the best possible way for our clients and our end-users [...]. There are many solutions for custom reports, and drill-downs and data extracts, and we have all manner of data tooling. But in the part that we’re really productizing with an experience layer on top, we’re definitely optimizing on the meaningful part versus the display side [which] maybe is a little bit of a ‘less is more’ type of approach.”- Bob Goodman (@bob_goodman) (17:25)
    • “Delivering insights is simply the topic that we’re starting with, which is just as a user, as a reader, especially a business reader, ‘how much can I intake? And what do I need to make sense of it?’ How declarative can you be, responsibly and appropriately, to bring the meaning and the insights forward? There might be a line that’s too much.”- Bob Goodman (@bob_goodman) (33:02)
    Links Referenced
    • LinkedIn: https://www.linkedin.com/in/bobgoodman/

    082 - What the 2021 $1M Squirrel AI Award Winner Wants You To Know About Designing Interpretable Machine Learning Solutions w/ Cynthia Rudin Jan 11, 2022

    Episode Description

    As the conversation around AI continues, Professor Cynthia Rudin, Computer Scientist and Director at the Prediction Analysis Lab at Duke University, is here to discuss interpretable machine learning and her incredible work in this complex and evolving field. To begin, she is the most recent (2021) recipient of the $1M Squirrel AI Award for her work on making machine learning more interpretable to users and ultimately more beneficial to humanity.

    In this episode, we explore the distinction between explainable and interpretable machine learning and how black boxes aren’t necessarily “better” than more interpretable models. Cynthia offers up real-world examples to illustrate her perspective on the role of humans and AI, and shares takeaways from her previous work, which ranges from predicting criminal recidivism to predicting manhole cover explosions in NYC (yes!). I loved this chat with her because, for one, Cynthia has strong, heavily informed opinions from her concentrated work in this area, and secondly, because Cynthia is thinking about both the end users of ML applications as well as the humans who are “out of the loop,” but nonetheless impacted by the decisions made by the users of these AI systems.

    In this episode, we cover:

    • Background on the Squirrel AI Award – and Cynthia unpacks the differences between Explainable and Interpretable ML. (00:46)
    • Using real-world examples, Cynthia demonstrates why black boxes should be replaced. (04:49)
    • Cynthia’s work on the New York City power grid project, exploding manhole covers, and why it was the messiest dataset she had ever seen. (08:20)
    • A look at the future of machine learning and the value of human interaction as it moves into the next frontier. (15:52)
    • Cynthia’s thoughts on collecting end-user feedback and keeping humans in the loop. (21:46)
    • The current problems Cynthia and her team are exploring—the Rashomon Set, optimal sparse decision trees, sparse linear models, causal inference, and more. (32:33)
    Quotes from Today’s Episode
    • “I’ve been trying to help humanity my whole life with AI, right? But it’s not something I tried to earn because there was no award like this in the field while I was trying to do all of this work. But I was just totally amazed, and honored, and humbled that they chose me.”- Cynthia Rudin on receiving the AAAI Squirrel AI Award. (@cynthiarudin) (1:03)
    • “Instead of trying to replace the black boxes with inherently interpretable models, they were just trying to explain the black box. And when you do this, there's a whole slew of problems with it. First of all, the explanations are not very accurate—they often mislead you. Then you also have problems where the explanation methods are giving more authority to the black box, rather than telling you to replace them.”- Cynthia Rudin (@cynthiarudin) (03:25)
    • “Accuracy at all costs assumes that you have a static dataset and you’re just trying to get as high accuracy as you can on that dataset. [...] But that is not the way we do data science. In data science, if you look at a standard knowledge discovery process, [...] after you run your machine learning technique, you’re supposed to interpret the results and use that information to go back and edit your data and your evaluation metric. And you update your whole process and your whole pipeline based on what you learned. So when people say things like, ‘Accuracy at all costs,’ I’m like, ‘Okay. Well, if you want accuracy for your whole pipeline, maybe you would actually be better off designing a model you can understand.’”- Cynthia Rudin (@cynthiarudin) (11:31)
    • “When people talk about the accuracy-interpretability trade-off, it just makes no sense to me because it’s like, no, it’s actually reversed, right? If you can actually understand what this model is doing, you can troubleshoot it better, and you can get overall better accuracy.”- Cynthia Rudin (@cynthiarudin) (13:59) (See the sketch after these quotes.)
    • “Humans and machines obviously do very different things, right? Humans are really good at having a systems-level way of thinking about problems. They can look at a patient and see things that are not in the database and make decisions based on that information, but no human can calculate probabilities really accurately in their heads from large databases. That’s why we use machine learning. So, the goal is to try to use machine learning for what it does best and use the human for what it does best. But if you have a black box, then you’ve effectively cut that off because the human has to basically just trust the black box. They can’t question the reasoning process of it because they don’t know it.”- Cynthia Rudin (@cynthiarudin) (17:42)
    • “Interpretability is not always equated with sparsity. You really have to think about what interpretability means for each domain and design the model to that domain, for that particular user.”- Cynthia Rudin (@cynthiarudin) (19:33)
    • “I think there's sometimes this perception that there's the truth from the data, and then there's everything else that people want to believe about whatever it says.”- Brian T. O’Neill (@rhythmspice) (23:51)
    • “Surveys have their place, but there's a lot of issues with how we design surveys to get information back. And what you said is a great example, which is 7 out of 7 people said, ‘this is a serious event.’ But then you find out that they all said serious for a different reason—and there's a qualitative aspect to that. […] The survey is not going to tell us if we should be capturing some of that information if we don't know to ask a question about that.”- Brian T. O’Neill (@rhythmspice) (28:56)
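
    Rudin’s trade-off point above is easy to sanity-check on a small scale. Below is a minimal Python sketch (our illustration, not code from the episode), assuming scikit-learn and its built-in breast cancer dataset: a depth-3 decision tree that a domain expert can read end to end, compared against a several-hundred-tree random forest black box. On many tabular datasets the accuracy gap is far smaller than the “trade-off” framing suggests.

        # Illustrative only: compare an interpretable model to a black box.
        # Assumes scikit-learn; the dataset and model choices are our own.
        from sklearn.datasets import load_breast_cancer
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier, export_text

        X, y = load_breast_cancer(return_X_y=True, as_frame=True)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        # Interpretable: a depth-3 tree whose full reasoning fits on one page.
        tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

        # Black box: 300 trees, no single readable reasoning path.
        forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

        print("tree accuracy  :", round(tree.score(X_test, y_test), 3))
        print("forest accuracy:", round(forest.score(X_test, y_test), 3))
        print(export_text(tree, feature_names=list(X.columns)))  # the whole model, in plain text
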
    Links
    • Squirrel AI Award: https://aaai.org/Pressroom/Releases/release-21-1012.php
    • “Machine Bias”: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
    • Cynthia Rudin’s homepage: https://users.cs.duke.edu/~cynthia
    • Teaching: https://users.cs.duke.edu/~cynthia/teaching.html

    081 - The Cultural and $ Benefits of Human-Centered AI in the Enterprise: Digging Into BCG/MIT Sloan’s AI Research w/ François Candelon Dec 28, 2021

    Episode Description

    The relationship between humans and artificial intelligence has been an intricate topic of conversation across many industries. François Candelon, Global Director at Boston Consulting Group Henderson Institute, has been a significant contributor to that conversation, most notably through an annual research initiative that BCG and MIT Sloan Management Review have been conducting about AI in the enterprise. In this episode, we’re digging particularly into the findings of the 2020 and 2021 studies that were just published at the time of this recording.

    Through these yearly findings, the study has shown that organizations with the most competitive advantage are the ones that are focused on effectively designing AI-driven applications around the humans in the loop. As these organizations continue to generate value with AI, the gap between them and companies that do not embrace AI has only increased. To close this gap, companies will have to learn to design trustworthy AI applications that actually get used, produce value, and are designed around mutual learning between the technology and users. François claims that a “human plus AI” approach—what former Experiencing Data guest Ben Shneiderman calls HCAI (see Ep. 062)—can create organizational learning, trust, and improved productivity.

    In this episode, we cover:

    • How the Henderson Institute is conducting its multi-year study with MIT Sloan Management Review. (00:43)
    • The core findings of the 2020 study, what the 10/20/70 rule is, and how François uses it to determine a company’s level of successful deployment of AI, and specific examples of what leading companies are doing in terms of user experience around AI. (03:08)
    • The core findings of the 2021 study, and how mutual learning between human and machine (i.e. the experience of learning from and contributing to ML applications) increases the success rate of AI deployments. (07:53)
    • The AI driving license for CxOs: A discussion about the gap between C-suite and data scientists and why it’s critical for teams to be agile and integrate both capabilities. (14:44)
    • Why companies should embed AI as the core of their operating process. (22:07)
    • François’ perspective on leveraging AI and why it is meant to solve problems and impact cultural change. (29:28)
    Quotes from Today’s Episode
    • “What makes the real difference is when you have what we call organizational learning, which means that at the same time you learn from AI as an individual, as a human, AI will learn from you. And this is relatively easy to understand because as we’re in a world, which is always more uncertain, the rate of learning, the ability for an organization to learn, is one of the most important competitive advantages.”- François Candelon (04:58)
    • “When there is an additional effectiveness linked to AI, people will feel more comfortable, will feel augmented, not replaced, and then they will trust AI. As they trust, they are ready to have additional use cases implemented and therefore you are entering into a virtuous cycle.”- François Candelon (08:06)
    • “If you try to optimize human plus AI and build on their respective capabilities—humans are much better at dealing with ambiguity and AI deals with large amounts of data. If you’re able to combine both, then you’re in a situation to be ready to create a source of competitive advantage.”- François Candelon (09:36)
    • “I think that’s largely the point of my show and what I’m trying to focus on is to talk to the people who do want to go beyond the technical work. Building technically right, effectively wrong solutions is something nobody needs, and at some point, not only is it not good for your career, but you might find it more rewarding to work on things that actually matter, that get used, that go into the world, that produce value. It’s more personally gratifying, not just for the business, but yourself.”- Brian T. O’Neill (@rhythmspice) (20:55)
    • “Making sure that AI becomes the core of your operating process and your operating model [is] very important. I think that very often companies ask themselves, ‘how could AI help me optimize my process?’ I believe that they should now move—or at least the most advanced—are now moving to, ‘how should I make sure that I redesign my process to get the full potential of AI, to bring AI at the core of my operating model?’”- François Candelon (24:40)
    • “AI is a way to solve problems, not an objective in itself. So, this is why when I used to say we are an AI-enabled or an AI-powered company, it shows a capability. It shows a way of thinking and the ability to deal with the foundational capabilities of AI. It’s not something else. And this is why—for the data scientists that will be open to better understanding business—they will learn a lot, and it will be very enlightening to be able to solve these issues and to solve these problems.”- François Candelon (30:51)
    • “The human in the loops matter, folks. For now at least, we’re still here. It’s not all machines running machines. So, you have to figure out the human-machine interaction. It’s not going away, and so when you’re ready, it’s time to face that we need to design for the human in the loop, and we need to think about the last mile, and we need to think about change, adoption, and all the human factors that go into the solution, as well as the technologies.”- Brian T. O’Neill (@rhythmspice) (35:35)
    Links
    • BCG Henderson Institute: https://bcghendersoninstitute.com/
    • François on LinkedIn: https://www.linkedin.com/in/françois-candelon

    080 - How to Measure the Impact of Data Products…and Anything Else with Forecasting and Measurement Expert Doug Hubbard Dec 14, 2021

    Finding it hard to pin down the value your data products create for the business or your end users? Do you struggle to understand the impact your data science, analytics, or product team is having on the people they serve?

    Many times, the challenge comes down to figuring out WHAT to measure, and HOW. Clients, users, and customers often don’t even know what the right success or progress metrics are, let alone how to quantify them. Learning how to measure what might seem impossible is a highly valuable skill for leaders who want to track their progress with data—but it’s not all black and white. It’s not always about “more data,” and measurement is also not about “the finite, right answer.” Analytical minds: get ready to embrace subjectivity and uncertainty in this episode!

    In this insightful chat, Doug and I explore examples from his book, How to Measure Anything, and we discuss its applicability to the world of data and data products. From defining trust to identifying cognitive biases in qualitative research, Doug shares how he views the world in ways that we can actually measure. We also discuss the relationship between data and uncertainty, forecasting, and why people who are trying to measure something usually believe they have a lot less data than they really do.

    Episode Description
    • A discussion about measurement, defining “trust”, and why it is important to collect data in a systematic way. (01:35)
    • Doug explores “concept, object and methods of measurement” - and why most people have more data than they realize when investigating questions. (09:29)
    • Why asking the right questions is more important than “needing to be the expert” - and a look at cognitive biases. (16:46)
    • The Dunning-Kruger effect and how it applies to the way people measure outcomes - and Doug discusses progress metrics vs. success metrics and the illusion of cognition. (25:13)
    • How one of the challenges with machine learning also creates valuable skepticism - and the three criteria for experience to convert into learning. (35:35)
    Quotes from Today’s Episode
    • “Often things like trustworthiness or collaboration, or innovation, or any—all the squishy stuff, they sound hard to measure because they’re actually an umbrella term that bundles a bunch of different things together, and you have to unpack it to figure out what it is you’re talking about. The beginning of all scientific inquiry is to figure out what your terms mean; what question are you even asking?”- Doug Hubbard (@hdr_frm) (02:33)

    • “Another interesting phenomenon about measurement in general and uncertainty, is that it’s in the cases where you have a lot of uncertainty when you don’t need many data points to greatly reduce it. [People] might assume that if [they] have a lot of uncertainty about something, that [they are] going to need a lot of data to offset that uncertainty. Mathematically speaking, just the opposite is true. The more uncertainty you have, the bigger uncertainty reduction you get from the first observation. In other words, if you know almost nothing, almost anything will tell you something. That’s the way to think of it.”- Doug Hubbard (@hdr_frm) (07:05)

    • “I think one of the big takeaways there that I want my audience to hear is that if we start thinking about when we’re building these solutions, particularly analytics and decision support applications, instead of thinking about it as we’re trying to give the perfect answer here, or the model needs to be as accurate as possible, changing the framing to be, ‘if we went from something like a wild-ass guess, to maybe my experience and my intuition, to some level of data, what we’re doing here is we’re chipping away at the uncertainty, right?’ We’re not trying to go from zero to 100. Zero to 20 may be a substantial improvement if we can just get rid of some of that uncertainty, because no solution will ever predict the future perfectly, so let’s just try to reduce some of that uncertainty.”- Brian T. O’Neill (@rhythmspice) (08:40)

    • “So, this is really important: [...] you have more data than you think, and you need less than you think. People just throw up their hands far too quickly when it comes to measurement problems. They just say, ‘Well, we don’t have enough data for that.’ Well, did you look? Tell me how much time you spent actually thinking about the problem or did you just give up too soon? [...] Assume there is a way to measure it, and the constraint is that you just haven’t thought of it yet.”- Doug Hubbard (@hdr_frm) (15:37) (See the sketch after these quotes.)
    • “I think people routinely believe they have a lot less data than they really do. They tend to believe that each situation is more unique than it really is [to the point] that you can’t extrapolate anything from prior observations. If that were really true, your experience means nothing.”- Doug Hubbard (@hdr_frm) (29:42)

    • “When you have a lot of uncertainty, that’s exactly when you don’t need a lot of data to reduce it significantly. That’s the general rule of thumb here. [...] If what we’re trying to improve upon is just the subjective judgment of the stakeholders, all the research today—and by the way, here’s another area where there’s tons of data—there’s literally hundreds of studies where naive statistical models are compared to human experts […] and the consistent finding is that even naive statistical models outperform human experts in a surprising variety of fields.”- Doug Hubbard (@hdr_frm) (32:50)
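
    Hubbard’s “you need less data than you think” claim has concrete arithmetic behind it. The “Rule of Five” from How to Measure Anything says that with just five random samples, there is a 93.75% chance (1 - 2 × (0.5)^5) that the population median lies between the smallest and largest values you observed. Here is a minimal Python sketch (our illustration, not code discussed in the episode) that checks the rule by simulation; the lognormal population and trial counts are arbitrary assumptions:

        # Illustrative only: simulate Hubbard's "Rule of Five".
        # P(median between min and max of 5 samples) = 1 - 2 * (1/2)**5 = 0.9375
        import random
        import statistics

        random.seed(0)
        population = [random.lognormvariate(0, 1) for _ in range(100_000)]
        median = statistics.median(population)

        trials, hits = 20_000, 0
        for _ in range(trials):
            sample = random.sample(population, 5)
            if min(sample) <= median <= max(sample):
                hits += 1

        print(f"analytic : {1 - 2 * 0.5**5:.4f}")  # 0.9375
        print(f"simulated: {hits / trials:.4f}")   # close to 0.9375
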
    Links Referenced
    • How to Measure Anything: https://www.amazon.com/gp/product/1118539273/
    • Hubbard Decision Research: https://hubbardresearch.com

    079 - How Sisu’s CPO, Berit Hoffmann, Is Approaching the Design of Their Analytics Product…and the UX Mistakes She Won’t Make Again Nov 30, 2021

    Berit Hoffmann, Chief Product Officer at Sisu, tackles design from a customer-centric perspective with a focus on finding problems at their source and enabling decision making. However, she had to learn some of those lessons the hard way, and in this episode, we dig into those experiences and what she’s now doing differently in her current role as CPO.

    In particular, Berit reflects on her “ivory tower design” experience at a past startup called Bebop. During that time, she quickly realized the importance of engaging with customer needs and building intuitive, simple solutions for complex problems. Berit also discusses the Double Diamond Process and how it shapes her own decision-making and the various ways she carries out her work at Sisu.

    In this episode, we also cover:

    • How Berit’s “ivory tower design experience” at Bebop taught her the importance of dedicating time to focus on the customer. (01:31)
    • What Berit looked for as she researched Sisu prior to joining - and how she and Peter Bailis, Founder and CEO, share the same philosophy on what a product’s user experience should look like. (03:57)
    • Berit discusses the Double Diamond Process and the life cycle of designing a project - and shares her take on designing for decision making. (10:17)
    • Sisu’s shift from answering the why to the what - and how they approach user testing using product as a metric layer. (19:10)
    • Berit explores the tension that can arise when designing a decision support tool. (31:03)
    Quotes from Today’s Episode
    • “I kind of learned the hard way the importance of spending that time with customers upfront and really digging into understanding what problems are most challenging for them. Those are the problems to solve, not the ones that you as a product manager or as a designer think are most important. It is a lesson I carry forward with me in terms of how I approach anything I'm going to work on now. The sooner I can get it in front of users, the sooner I can get feedback and really validate or invalidate my assumptions, the better because they're probably going to tell me why I'm wrong.”- Berit Hoffmann (03:15)
    • “As a designer and product thinker, the problem finding is almost more important than the solutioning because the solution is easy when you really understand the need. It's not hard to come up with good solutions when the need is so clear, which you can only get through conversation, inquiry, shadowing, and similar research and design methods.” - Brian T. O’Neill (@rhythmspice) (10:54)
    • “Decision-making is a human process. There's no world in which you're going to spit out an answer and say, ‘just go do it.’ Software is always going to be missing the rich context and expertise that humans have about their business and the context in which they're making the decision. So, what that says to me is inherently, decision-making is also going to be an iterative process. [...] What I think technology can do is it can automate and accelerate a lot of the manual repetitive steps in the analysis that are taking up a bunch of time today. Especially as data is getting exponentially more complex and multi-dimensional.”- Berit Hoffmann (17:44)
    • “When we talk to people about solving problems, 9 out of 10 people say they would add something to whatever it is that you're making to make it better. So often, when designers think about modernism, it is very much about ‘what can I take away that will help it make it better?’ And, I think this gets lost. The tendency with data, when you think about how much we're collecting and the scale of it, is that adding it is always going to make it better and it doesn't make it better all the time. It can slow things down and cause noise. It can make people ask even more questions. When in reality, the goal is to make a decision.”- Brian T. O’Neill (@rhythmspice) (30:11)
    • “I’m trying to resist the urge to get industry-specific or metric-specific in any of the kind of baseline functionality in the product. And instead, say that we can experiment in a lightweight way outside of the product: help content, guidance on best practices, etc. That is going to be a constant tension because the types of decisions that you enact and the types of questions you're digging into are really different depending on whether you're a massive hotel chain compared to a quick-service restaurant compared to a B2B SaaS company. The personas and the questions are so different. So that's a tension that I think is really interesting when you think about the decision-making workflow and who those stakeholders are.”- Berit Hoffmann (32:05)
    Links Referenced
    • Sisu: https://sisudata.com
    • Berit Hoffmann on LinkedIn: https://www.linkedin.com/in/Hoffmannn-berit/
    • Sisu on LinkedIn: https://www.linkedin.com/company/sisu-data/

    078 - From Data to Product: What is Data Product Management and Why Do We Need It with Eric Weber Nov 16, 2021

    Eric Weber, Head of Data Product at Yelp, has spent his career developing a product-minded approach to producing data-driven solutions that actually deliver value. For Eric, developing a data product mindset is still quite new, and today we’re digging into all things “data product management” and why thinking of data with a product mindset matters.

    In our conversation, Eric defines what data products are and explains the value that data product managers can bring to their companies. Eric’s ethos of centering on empathy, balanced equally with technical credibility, is central to his perspectives on data product management. We also discussed how Eric is bringing all of this to bear at Yelp and the various ways they’re tackling their customers' data product needs.

    In this episode, we also cover:

    • What is a data product and why do we need data product management? (01:34)
    • Why successful data product managers carry two important traits - empathy and technical credibility. (10:47)
    • A discussion about the levels of problem-solving maturity, the challenge behind delivering solutions, and where product managers can be the most effective during the process. (16:54)
    • A look at Yelp’s customer research strategy and what they are focusing on to optimize the user experience. (21:28)
    • How Yelp’s product strategy is influenced by classes of problems – and Yelp’s layers of experimentation. (27:38)
    • Eric reflects on unlearning and talks about his newsletter, From Data to Product. (34:36)
    Quotes from Today’s Episode
    • “Data products bring companies a way to think about the long-term viability and sustainability of their data investments. [...] And part of that is creating things that are sustainable, that have a strategy, that have a customer in mind. And a lot of these things people do - maybe they don't call it out explicitly, but this is a packaging that I think focuses us in the right places rather than hoping for the best.”- Eric Weber (@edweber1) (02:43)
    • “My hypothesis right now is that by introducing [product management] as a role, you create a vision for our product that is not just tied to a person, it's not just tied to a moment in time of the company. It's something where you can actually have another product manager come in and understand where things are headed. I think that is really the key to seeing the 10 to 20-year sustainability, other than crossing your fingers and hoping that one person stays for a long time, which is kind of a tough bet in this environment.”- Eric Weber (@edweber1) (07:27)
    • “My background is in design and one of the things that I have to work on a lot with my clients and with data scientists in particular, is getting out of the head of wanting to work on “the thing” and learning how to fall in love with the customer's problem and their need. And this whole idea of empathy, not being a squishy thing, but do you want your work to matter? Or, do you just write code or work on models all day long and you don't care if it ships and makes a difference? I think good product-minded people care a lot about that outcome. So, this output versus outcome thing is a mindset change that has to happen.”- Brian T. O’Neill (@rhythmspice) (10:56)
    • “The question about whether you focus on internal development or external buying often goes back to, what is your business trying to do? And how much is this going to cost us over time? And it's fascinating because I want [anyone listening] to come across [the data product] field as an area in motion. It's probably going to look pretty different a year from now, which I find pretty awesome and fascinating myself.”- Eric Weber (@edweber1) (27:02)
    • “If you don't have a deep understanding of what your customer is trying to do and are able to abstract it to some general class of problem, you're probably going to end up building a solution that's too narrow and not sustainable because it will solve something in the short term. But, what if you have to re-architect the whole thing? That's where it becomes really expensive and where having a product strategy pays off.”- Eric Weber (@edweber1) (31:28)
    • “I've had to unlearn that idea that I need to create a definitive framework of what someone does. I just need to be able to put on different lenses. [For example] if I'm talking to design today, these are probably the things that they're going to be focused on and concerned about. If I'm talking to our executive team, this is probably how they're going to break this problem down and look at it. So, I think it's not necessarily dropping certain frameworks, it's being able to understand that some of them are useful in certain scenarios and they're not in others. And that ability is something that I think has created this chance for me to look at the data product from different spaces and think about why it might be valuable.”- Eric Weber (@edweber1) (35:54)
    Links
    • Subscribe to “From Data to Product” on Substack
    • LinkedIn
    • Yelp

    077 - Productizing Analytics for Performing Arts Organizations with AMS Analytics CPO Jordan Gross Richmond Nov 02, 2021

    Even in the performing arts world, data and analytics serve a purpose. Jordan Gross Richmond is the Chief Product Officer at AMS Analytics, where they provide benchmarking and performance reporting to performing arts organizations. As many of you know, I’m also a musician who tours and performs in the performing arts market, and so I was curious to hear how data plays a role “off the stage” within these organizations. In particular, I wanted to know how Jordan designed the interfaces for AMS Analytics’s product, and what’s unique (or not!) about using data to manage arts organizations.

    Jordan also talks about the beginnings of AMS and their relationship with leaders in the performing arts industry and the “birth of benchmarking” in this space. From an almost manual process in the beginning, AMS now has a SaaS platform that allows performing arts centers to see the data that helps drive their organizations. Given that many performing arts centers are non-profit organizations, I also asked Jordan about how these organizations balance their artistic mission against the colder, harder facts of data such as ticket sales, revenue, and “the competition.”

    In this episode, we also cover:

    • How the AMS platform helps leaders manage their performing arts centers and the evolution of the AMS business model. (01:10)
    • Benchmarking as a measure of success in the performing arts industry and the “two buckets of context” AMS focuses on. (06:00)
    • Strategies for measuring intangible success and how performing arts data is about more than just the number of seats filled at concerts and shows. (15:48)
    • The relationships between AMS and its customers, their organizational structure, and how AMS has shaped it into a useful SaaS product. (26:27)
    • The role of users in designing the solution and soliciting feedback and what Jordan means when he says he “focuses on the problems, and not the solutions” in his role as Chief Product Officer. (35:38)
    Quotes from Today’s Episode
    • “I think [AMS] is a one-of-a-kind thing, and what it does now is it provides what I consider to be a steering wheel for these leaders. It’s not the kind of thing that’s going to help anybody figure out what to do tomorrow; it’s more about what’s going on in a year from now and in five years from now. And I think the need for this particular vision comes from the evolution in the business model in general of the performing arts and the cultural arts in America.”- Jordan Gross Richmond (@the1jordangross) (03:07)
    • “No one metric can solve everything. It’s a one-to-one relationship in terms of data model to analytical point. So, we have to be really careful that we don't think that just because there's a lot of charts on the screen, we must be able to answer all of our [customers'] questions.”- Jordan Gross Richmond (@the1jordangross) (18:18)
    • “We are absolutely a product-led organization, which essentially means that the solutions are built into the product, and the relationship with the clients and the relationship with future clients is actually all engineered into the product itself. And so I never want to create anything in a black box. Nobody benefits from a feature that nobody cares about.”- Jordan Gross Richmond (@the1jordangross) (29:16)
    • “This is an evolution that's driven not by the technology itself, but [...] by the key stakeholders amongst this community. And we found that to be really successful. In terms of product line growth, when you listen to your users and they feel heard, the sky's the limit. Because at that point, they have buy-in, so you have a real relationship.”- Jordan Gross Richmond (@the1jordangross) (31:11)
    • “Successful product leaders don't focus on the solutions. We focus on the problems. And that's where I like to stay, because sometimes we kind of get into lots of proposals. My role in these meetings is often to help identify the problem and make sure we're all solving the same problem because we can get off pretty easily on a solution that sounds sexy [or] interesting, but if we're not careful, we might be solving a problem that doesn't even exist.”- Jordan Gross Richmond (@the1jordangross) (35:09)
    • “It’s about starting with the customer’s problems and working backwards from that. I think that you have to start with the problem space that they're in, and then you do the best job you can with the data that's available. [...] So, I love the fact that you're having these working groups. Sometimes we call these design partners in the design world, and I think that kind of regular interaction and exposure, especially early and as frequently as possible, is a great habit.”- Brian T. O’Neill (@rhythmspice) (40:26)
    Links Referenced
    • AMS Analytics: https://www.ams-analytics.com/


    076 - How Bedrock’s “Data by Design” Mantra Helps Them Build Human-Centered Solutions with Jesús Templado Oct 19, 2021

    Why do we need or care about design in the work of data science? Jesús Templado, Managing Director at Bedrock, is here to tell us about how Bedrock executes their mantra, “data by design.”

    Bedrock has found ways to bring to their clients a design-driven, human-centered approach by utilizing a “hybrid model” to synthesize technical possibilities with human needs. In this episode, we explore Bedrock’s vision for how to achieve this synthesis as part of the firm’s DNA, and how Bedrock adopted their vision to make data more approachable with the client being central to their design efforts. Jesús also discusses a time when he championed making “data by design” a successful strategy with a large chain of hotels, and he offers insight on how making clients feel validated and heard plays a part.

    In our chat, we also covered:

    • “Data by design” and how Bedrock implements this design-driven approach. (00:43)
    • Bedrock’s vision for how they support their clients and why design has always been part of their DNA. (08:53)
    • Jesús shares a time when he successfully implemented a design process for a large chain of hotels, and some of the challenges that came with that approach. (14:47)
    • The importance of making clients feel heard by dedicating time to research and UX and how the team navigates conversations about risk with customers. (24:12)
    • More on the client experience and how Bedrock covers a large spectrum of areas to ensure that they deliver a product that makes sense for the customer. (33:01)
    • Jesús’ opinion on why companies should consider change management when building products and systems - and a look at the Data Stand-Up podcast. (35:42)
    Quotes from Today’s Episode

    “Many people in corporations don’t have the technical background to understand the possibilities when it comes to analyzing or using data. So, bringing a design-based framework, such as design thinking, is really important for all of the work that we do for our clients.” - Jesús Templado (2:33)

    “We’ve mentioned “data by design” before as our mantra; we very much prefer building long-lasting relationships based on [understanding] our clients' business and their strategic goals. We then design and ideate an implementation roadmap with them and then based on that, we tackle different periods for building different models. But we build the models because we understand what’s going to bring us an outcome for the business—not because the business brings us in to deliver only a model for the sake of predicting what the weather is going to be in two weeks.”- Jesús Templado (14:07)

    “I think as consultants and people in service, it’s always nice to make friends. And, I like when I can call a client a friend, but I feel like I’m really here to help them deliver a better future state [...] And the road may be bumpy, especially if design is a new thing. And it is often new; in the context of data science and analytics projects.”- Brian T. O’Neill (@rhythmspice) (26:49)

    “When we do data science [...] that’s a means to an end. We do believe it’s important that the client understands the reasoning behind everything that we do and build, but at the end of the day, it’s about understanding that business problem, understanding the challenge that the company is facing, knowing what the expected outcome is, and knowing how you will deliver or predict that outcome to be used for something meaningful and relevant for the business.”- Jesús Templado (33:06)

    “The appetite for innovation is high, but a lot of the companies that want to do it are more concerned about risk. Risk and innovation are at opposite ends of the spectrum. And so, if you want to be innovative, by definition—you’re signing up for failure on the way to success. [...] It’s about embracing an iterative process, it’s about getting feedback along the way, it’s about knowing that we don’t know everything, and we’re signing up for that ambiguity along the way to something better.”- Brian T. O’Neill (@rhythmspice) (38:20)

    Links Referenced
    • Bedrock: https://bedrockdbd.com
    • Data Stand-Up podcast: https://bedrockdbd.com/podcast/
    • LinkedIn: https://www.linkedin.com/in/Jesústg/

    075 - How CDW is Integrating Design Into Its Data Science and Analytics Teams with Prasad Vadlamani Oct 05, 2021

    How do we get the most breadth out of design and designers when building data products? One way is to have designers be at the front leading the charge when it comes to creating data products that must be useful, usable, and valuable.

    For this episode, Prasad Vadlamani, CDW’s Director of Data Science and Advanced Analytics, joins us for a chat about how they are making design a larger focus of how they create useful, usable data products. Prasad talks about the importance of making technology—including AI-driven solutions—human centered, and how CDW tries to keep the end user in mind.

    Prasad and I also discuss his perspectives on how to build designers into a data product team and how to successfully navigate the grey areas between various areas of expertise. When this is done well, then the entire team can work with each other's strengths and advantages to create a more robust product. We also discuss the role a UI-free user experience plays in some data products, some differences between external and internally-facing solutions, and some of Prasad’s valuable takeaways that have helped to shape the way he thinks design, data science, and analytics can collaborate.

    In our chat, we covered:

    • Prasad’s first introduction to designers and how he leverages the disciplines of design and product in his data science and analytics work (1:09)
    • The terminology behind product manager and designer and how these functions play a role in an enterprise AI team (5:18)
    • How teams can use their wide range of competencies to their advantage (8:52)
    • A look at one UI-less experience and the value of the “invisible interface” (14:58)
    • Understanding the model development process and why the model takes up only a small percentage of the effort required to successfully bring a data product to end users (20:52)
    • The differences between building an internal vs. external product, what to consider, and Prasad’s “customer zero” approach. (29:17)
    • Expectations Prasad sets with customers (stakeholders) about the life expectancy of data products when they are in their early stage of development (35:02)

    074 - Why a Former Microsoft ML/AI Researcher Turned to Design to Create Intelligent Products from Messy Data with Abhay Agarwal, Founder of Polytopal Sep 21, 2021

    Episode Description

    The challenges of design and AI are exciting ones to face. The key to being successful in that space lies in many places, but one of the most important is instituting the right design language.

    For Abhay Agarwal, Founder of Polytopal, when he began to think about design during his time at Microsoft working on systems to help the visually impaired, he realized the necessity of a design language for AI. Stepping away from that experience, he leaned into creating a new methodology of design centered around human needs. His efforts have helped shift the lens of design towards how people solve problems.

    In this episode, Abhay and I go into details on a snippet from his course page at the Stanford d.school, where he claimed that “the foreseeable future would not be well designed, given the difficulty of collaboration between disciplines.” Abhay breaks down how he thinks his design language for AI should work and how to build it out so that everyone in an organization can come to a more robust understanding of AI. We also discuss the future of designers and AI and the ebb and flow of changing, learning, and moving forward with the AI narrative.

    In our chat, we covered:

    • Abhay’s background in AI research and what happened to make him move towards design as a method to produce intelligence from messy data. (1:01)
    • Why Abhay has come up with a new design language called Lingua Franca for machine learning products [and his course on this at Stanford’s d.school]. (3:21)
    • How to become more human-centered when building AI products, what ethnographers can uncover, and some of Abhay’s real-world examples. (8:06)
    • Biases in design and the challenges in developing a shared language for both designers and AI engineers. (15:59)
    • Discussing interpretability within black box models using music recommendation systems, like Spotify, as an example. (19:53)
    • How “unlearning” solves one of the biggest challenges teams face when collaborating and engaging with each other. (27:19)
    • How Abhay is shaping the field of design and ML/AI -- and what’s in store for Lingua Franca. (35:45)
    Quotes from Today's Episode

    “I certainly don’t think that one needs to hit the books on design thinking or listen to a design thinker describe their process in order to get the fundamentals of a human-centered design process. I personally think it’s something that one can describe to you within the span of a single conversation, and someone who is listening to that can then interpret that and say, ‘Okay well, what am I doing that could be more human-centered?’ In the AI space, I think this is the perennial question.” - Abhay Agarwal (@Denizen_Kane) (6:30)

    “Show me a company where designers feel at an equivalent level to AI engineers when brainstorming technology? It just doesn’t happen. There’s a future state that I want us to get to that I think is along those lines. And so, I personally see this as, kind of, a community-wide discussion, engagement, and multi-strategy approach.” - Abhay Agarwal (@Denizen_Kane) (18:25)

    “[Discussing ML data labeling for music recommenders] I was just watching a video about drum and bass production, and they were talking about, ‘Or you can write your bass lines like this’ — and they call it reggaeton. And it’s not really reggaeton at all, which was really born in Puerto Rico. And Brazil does the same thing with their versions of reggae. It’s not the one-drop reggae we think of with Bob Marley and Jamaica. So already, we’ve got labeling issues — and they’re not even wrong; it’s just that that’s the way one person might interpret what these musical terms mean.” - Brian O’Neill (@rhythmspice) (25:45)

    “There is a new kind of hybrid role that is emerging that we play into...which is an AI designer, someone who is very proficient with understanding the dynamics of AI systems. The same way that we have digital UX designers, app designers—there had to be apps before they could be app designers—there is now AI, and then there can thus be AI designers.” - Abhay Agarwal (@Denizen_Kane) (33:47)

    Links Referenced
    • Lingua Franca: https://linguafranca.polytopal.ai
    • Polytopal.ai: https://polytopal.ai
    • Polytopal email: hello@polytopal.ai
    • LinkedIn: https://www.linkedin.com/in/abhaykagarwal/
    • Personal Twitter: https://twitter.com/Denizen_Kane
    • Polytopal Twitter: https://twitter.com/polytopal_ai

    073 - Addressing the Functional and Emotional Needs of Users When Designing Data Products with Param Venkataraman Sep 07, 2021

    Episode Description

    Simply put, data products help users make better decisions and solve problems with information. But how effective can data products be if designers don’t take the time to explore the complete needs of users?

    To Param Venkataraman, Chief Design Officer at Fractal Analytics, understanding the 'human dimension' of a problem is crucial to building data solutions that make an impact.

    On this episode of Experiencing Data, Param and I talk more about his concept of 'attractive non-conscious design,' the core skills of a professional designer, and why Fractal has a C-suite design officer and is making large investments in UX.

    In our chat, we covered:

    • Param's role as Chief Design Officer at Fractal Analytics, and the company's sharp focus on the 'human dimension' of enterprise data products. (2:04)
    • 'Attractive non-conscious design': Creating easy-to-use, 'delightful' data products that help end-users make better decisions by focusing on their needs. (5:32)
    • The importance of understanding the 'emotional need' of users when designing enterprise data products. (9:07)
    • Why designers as well as data science and analytics teams should focus more on the emotional and human element when building data products. (16:15)
    • 'The next version of design': Why and how Param believes the classic design thinking model must adapt to the 'post-data science world.' (21:39)
    • The core competencies of a professional designer and how it relates to data products. (25:59)
    • Why non-designers should learn the principles of good design — and how Fractal’s internal Phi Design System helps frame problems from the perspective of a data product's end-user, leading to better solutions. (27:51)
    • Why Param believes the coming together of design and data still needs time to mature. (33:40)
    Quotes from Today’s Episode

    “When you look at analytics and the AI space … there is so much that is about how do you use ... machine learning … [or] any other analytics technology or solutions — and how do you make better effective decisions? That’s at the heart of it, which is how do we make better decisions?” - Param Venkataraman (@onwardparam) (6:23)

    “[When it comes to business software,] most of it should be invisible; you shouldn’t really notice it. And if you’re starting to notice it, you’re probably drawing attention to the wrong thing because you’re taking people out of flow.” - Brian O’Neill (@rhythmspice) (8:57)

    “Design is kind of messy … there’s sort of a process ... but it’s not always linear, and we don’t always start at step zero. … You might come into something that’s halfway done and the first thing we do is run a usability study on a competitor’s thing, or on what we have now, and then we go back to step two, and then we go to five. It’s not serial, and it’s kind of messy, and that’s normal.” - Brian O’Neill (@rhythmspice) (16:18)

    “Just like design is iterative, data science also is very iterative. There’s the idea of hypothesis, and there’s an idea of building and experimenting, and then you sort of learn and your algorithm learns, and then you get better and better at it.” - Param Venkataraman (@onwardparam) (18:05)

    “The world of data science is not used to thinking in terms of emotion, experience, and the so-called softer aspects of things, which in my opinion are not actually the softer aspects; they’re actually the hardest part. It’s harder to dimensionalize emotion, experience, and behavior, which is … extremely complex, extremely layered, [and] extremely unpredictable. … I think the more we can bring those two worlds together, the world of evidence, the world of data, the world of quantitative information with the qualitative, emotional, and experiential, I think that’s where the magic is.” - Param Venkataraman (@onwardparam) (21:02)

    “I think the coming together of design and data is... a new thing. It’s unprecedented. It’s a bit like how the internet was a new thing back in the mid ’90s. We were all astounded by it, we didn’t know what to do with it, and everybody was just fascinated with it. And we just knew that it’s going to change the world in some way. … Design and data will take some time to mature, and what’s more important is to go into it with an open mind and experiment. And I’m saying this for both designers as well as data scientists, to try and see how the right model might evolve as we experiment and learn.” - Param Venkataraman (@onwardparam) (33:58)

    Links Referenced
    • Fractal Analytics: https://fractal.ai
    • LinkedIn: https://www.linkedin.com/in/parameswaranv/
    • Twitter: https://twitter.com/onwardparam

    072 - How to Get Stakeholders to Reveal What They Really Need From a Data Product with Cindy Dishmey Montgomery Aug 24, 2021

    Episode Description

    How do you extract the real, unarticulated needs from a stakeholder or user who comes to you asking for AI, a specific app feature, or a dashboard?

    On this episode of Experiencing Data, Cindy Dishmey Montgomery, Head of Data Strategy for Global Real Assets at Morgan Stanley, was gracious enough to let me put her on the spot and simulate a conversation between a data product leader and customer.

    I played the customer, and she did a great job helping me think differently about what I was asking her to produce for me — so that I would be getting an outcome in the end, and not just an output. We didn’t practice or plan this exercise; it just happened — and she handled it like a pro! I wasn’t surprised; her product- and user-first approach told me that she had a lot to share with you, and indeed she did!

    A computer scientist by training, Cindy has worked in data, analytics and BI roles at other major companies, such as Revantage, a Blackstone real estate portfolio company, and Goldman Sachs. Cindy was also named one of the 2021 Notable Women on Wall Street by Crain’s New York Business.

    Cindy and I also talked about the “T” framework she uses to achieve high-level business goals, as well as the importance for data teams to build trust with end-users.

    In our chat, we covered:

    • Bringing product management strategies to the creation of data products to build adoption and drive value. (0:56)
    • Why the first data hire when building an internal data product should be a senior leader who is comfortable with pushing back. (3:54)
    • The "T" Framework: How Cindy, as Head of Data Strategy, Global Real Assets at Morgan Stanley, works to achieve high-level business goals. (8:48)
    • How building trust with internal stakeholders by creating valuable and smaller data products is key to eventually working on bigger data projects. (12:38)
    • How data's role in business is still not fully understood. (18:17)
    • The importance for data teams to understand a stakeholder's business problem and also design a data product solution in collaboration with them. (24:13)
    • 'Where's the why': Cindy and Brian roleplay as a data product manager and a customer, respectively, and simulate how to successfully identify a customer’s problem and also open them up to new solutions. (28:01)
    • The benefits of a data product management role — and why 'everyone should understand product.' (33:49)
    Quotes from Today’s Episode

    “There’s just so many good constructs in the product management world that we have not yet really brought very close to the data world. We tend to start with the skill sets, and the tools, and the ML/AI … all the buzzwords. [...] But brass tacks: when you have a happy set of consumers of your data products, you’re creating real value.” - Cindy Dishmey Montgomery (1:55)

    “The path to value lies through adoption and adoption lies through giving people something that actually helps them do their work, which means you need to understand what the problem space is, and that may not be written down anywhere because they’re voicing the need as a solution.” - Brian O’Neill (@rhythmspice) (4:07)

    “I think our data community tends to over-promise and under-deliver as a way to get the interest, which is actually quite successful when you have this notion of, ‘If you build AI, profit will come.’ But that is a really, really hard promise to make and keep.” - Cindy Dishmey Montgomery (12:14)

    “[Creating a data product for a stakeholder is] definitely something where you have to be close to the business problem and design it together. … The struggle is making sure organizations know when the right time and what the right first hire is to start that process.” - Cindy Dishmey Montgomery (23:58)

    “The temporal aspect of design is something that’s often missing. We talk a lot about the artifacts: the Excel sheet, the dashboard, the thing, and not always about when the thing is used.” - Brian O’Neill (@rhythmspice) (27:27)

    “Everyone should understand product. And even just creating the language of product is very helpful in creating a center of gravity for everyone. It’s where we invest time, it’s how it’s meant to connect to a certain piece of value in the business strategy. It’s a really great forcing mechanism to create an environment where everyone thinks in terms of value. And the thing that helps us get to value, that’s the data product.” - Cindy Dishmey Montgomery (34:22)

    Links Referenced
    • LinkedIn: https://www.linkedin.com/in/cindy-dishmey/

    071 - The ROI of UX Research and How It Applies to Data Products with Bill Albert Aug 10, 2021

    There are many benefits in talking with end users and stakeholders about their needs and pain points before designing a data product.

    Just take it from Bill Albert, executive director of the Bentley University User Experience Center, author of Measuring the User Experience, and my guest for this week’s episode of Experiencing Data. With a career spanning more than 20 years in user experience research, design, and strategy, Bill has great insights on how UX research is pivotal to designing a useful data product, the different types of customer research, and how many users you need to talk to in order to get useful information.

    In our chat, we covered:

    • How UX research techniques can help increase adoption of data products. (1:12)
    • Conducting 'upfront research': Why talking to end users and stakeholders early on is crucial to designing a more valuable data product. (8:17)
    • 'A participatory design process': How data scientists should conduct research with stakeholders before and during the designing of a data product. (14:57)
    • How to determine sample sizes in user experience research — and when to use qualitative vs. quantitative techniques (a quick sketch of the underlying math follows this list). (17:52)
    • How end user research and design improvements helped Boston Children's Hospital drastically increase the number of recurring donations. (24:38)
    • How a person's worldview and experiences can shape how they interpret data. (32:38)
    • The value of collecting metrics that reflect the success and usage of a data product. (38:11)
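
    A quick, hypothetical sketch of the sample-size reasoning referenced above (it assumes the widely cited problem-discovery model from the UX research literature, not figures Bill gives in the episode): if each test participant independently surfaces a given usability problem with probability p, then n participants surface it at least once with probability 1 - (1 - p)^n.

        # Sketch: problem-discovery model for usability-test sample sizes.
        # Hypothetical illustration; assumes the classic 1 - (1 - p)^n model
        # and a commonly cited per-user discovery rate of roughly 31%.

        def discovery_rate(p: float, n: int) -> float:
            """Probability that a problem each user hits with probability p
            is observed at least once across n participants."""
            return 1 - (1 - p) ** n

        for n in (1, 3, 5, 10, 15):
            print(f"{n:2d} users -> {discovery_rate(0.31, n):.0%} of such problems found")

    Under these assumptions, five users already surface roughly 84% of such problems, which is one reason small qualitative studies are often enough before reaching for large quantitative ones.
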
    Quotes from Today’s Episode

    “Teams are constantly putting out dashboards and analytics applications — and now it’s machine learning and AI — and a whole lot of it never gets used because it hits all kinds of human walls in the deployment part.” - Brian (3:39)

    “Dare to be simple. It’s important to understand giving [people exactly what they] want, and nothing more. That’s largely a reflection of organizational maturity; making those tough decisions and not throwing out every single possible feature [and] function that somebody might want at some point.” - Bill (7:50)

    “As researchers, we need to more deeply understand the user needs and see what we’re not observing in the lab [and what] we can’t see through our analytics. There’s so much more out there that we can be doing to help move the experience forward and improve that in a substantial way.” - Bill (10:15)

    “You need to do the upfront research; you need to talk to stakeholders and the end users as early as possible. And we’ve known about this for decades, that you will get way more value and come up with a better design, better product, the earlier you talk to people.” - Bill (13:25)

    “Our research methods don’t change because what we’re trying to understand is technology-agnostic. It doesn’t matter whether it’s a toaster or a mobile phone — the questions that we’re trying to understand of how people are using this, how can we make this a better experience, those are constant.” - Bill (30:11)

    “I think, with what’s sometimes called model interpretability or explainable AI, I am seeing a change in the market: more focus on explainability, less on model accuracy at all costs, which often means advanced techniques like deep learning — essentially black box techniques right now. And the cost associated with a black box is, ‘I don’t know how you came up with this and I’m really leery to trust it.’” - Brian (31:56)
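
    To make the contrast concrete, here is a minimal, hypothetical sketch (in Python, using scikit-learn; not code discussed in the episode) of an interpretable-by-design model, where the fitted weights themselves serve as the explanation rather than requiring a post-hoc technique:

        # Sketch: an interpretable-by-design alternative to a black-box model.
        # Hypothetical illustration of the explainability-vs-accuracy trade-off;
        # the dataset and model choices are arbitrary.
        from sklearn.datasets import load_breast_cancer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        X, y = load_breast_cancer(return_X_y=True, as_frame=True)
        model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X, y)

        # Unlike a deep net, the fitted coefficients ARE the explanation:
        # one signed, directly inspectable weight per input feature.
        coefs = model.named_steps["logisticregression"].coef_[0]
        top5 = sorted(zip(X.columns, coefs), key=lambda t: -abs(t[1]))[:5]
        for name, weight in top5:
            print(f"{name:25s} {weight:+.2f}")
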

    Resources and Links:
    • Bentley University User Experience Center: https://www.bentley.edu/centers/user-experience-center
    • Measuring the User Experience: https://www.amazon.com/Measuring-User-Experience-Interactive-Technologies/dp/0124157815
    • www.bentley.edu/uxc: https://www.bentley.edu/uxc
    • LinkedIn: https://www.linkedin.com/in/walbert/

    070 - Fighting Fire with ML, the AI Incident Database, and Why Design Matters in AI-Driven Software with Sean McGregor Jul 27, 2021

    As much as AI has the ability to change the world in very positive ways, it also can be incredibly destructive. Sean McGregor knows this well, as he is currently developing the Partnership on AI’s AI Incident Database, a searchable collection of news articles that cover questionable uses, failures, and other incidents that affect people when AI solutions are poorly designed.

    On this episode of Experiencing Data, Sean takes us through his notable work around using machine learning in the domain of fire suppression, and how human-centered design is critical to ensuring these decision support solutions are actually used and trusted by the users. We also covered the social implications of new decision-making tools leveraging AI, and:

    • Sean's focus on ensuring his models and interfaces were interpretable by users when designing his fire-suppression system and why this was important. (0:51)
    • How Sean built his fire suppression model so that different stakeholders can optimize the system for their unique purposes. (8:44)
    • The social implications of new decision-making tools. (11:17)
    • Tailoring to the needs of 'high-investment' and 'low-investment' people when designing visual analytics. (14:58)
    • The AI Incident Database: Preventing future AI deployment harm by collecting and displaying examples of the unintended and negative consequences of AI. (18:20)
    • How human-centered design could prevent many incidents of harmful AI deployment — and how it could also fall short. (22:13)
    • 'It's worth the time and effort': How taking time to agree on key objectives for a data product with stakeholders can lead to greater adoption. (30:24)
    Quotes from Today’s Episode

    “As soon as you enter into the decision-making space, you’re really tearing at the social fabric in a way that hasn’t been done before. And that’s where analytics and the systems we’re talking about right now are really critical because that is the middle point that we have to meet in and to find those points of compromise.” - Sean (12:28)

    “I think that a lot of times, unfortunately, the assumption [in data science is], ‘Well if you don’t understand it, that’s not my problem. That’s your problem, and you need to learn it.’ But my feeling is, ‘Well, do you want your work to matter or not? Because if no one’s using it, then it effectively doesn’t exist.’” - Brian (17:41)

    “[The AI Incident Database is] a collection of largely news articles [about] bad things that have happened from AI [so we can] try and prevent history from repeating itself, and [understand] more of [the] unintended and bad consequences from AI....” - Sean (19:44)

    “Human-centered design will prevent a great many of the incidents [of AI deployment harm] that have and are being ingested in the database. It’s not a hundred percent thing. Even in human-centered design, there’s going to be an absence of imagination, or at least an inadequacy of imagination for how these things go wrong because intelligent systems — as they are currently constituted — are just tremendously bad at the open-world, open-set problem.” - Sean (22:21)

    “It’s worth the time and effort to work with the people that are going to be the proponents of the system in the organization — the ones that assure adoption — to kind of move them through the wireframes and examples and things that at the end of the engineering effort you believe are going to be possible. … Sometimes you have to know the nature of the data and what inferences can be delivered on the basis of it, but really not jumping into the principal engineering effort until you adopt and agree to what the target is. [This] is incredibly important and very often overlooked.” - Sean (31:36)

    “The things that we’re working on in these technological spaces are incredibly impactful, and you are incredibly powerful in the way that you’re influencing the world in a way that has never, on an individual basis, been so true. And please take that responsibility seriously and make the world a better place through your efforts in the development of these systems. This is right at the crucible for that whole process.” - Sean (33:09)

    Links Referenced
    • seanbmcgregor.com: https://seanbmcgregor.com
    • AI Incident Database: https://incidentdatabase.ai
    • Partnership on AI: https://www.partnershiponai.org

    • Twitter: https://twitter.com/seanmcgregor


    069 - The Role of Creativity and Product Thinking in Data Monetization with ‘Infonomics’ Author Doug Laney Jul 13, 2021

    Doug Laney is the preeminent expert in the field of infonomics — and it’s not just because he literally wrote the book on it.

    As the Data & Analytics Strategy Innovation Fellow at consulting firm West Monroe, Doug helps businesses use infonomics to measure the economic value of their data and monetize it. He also is a visiting professor at the University of Illinois at Urbana-Champaign where he teaches classes on analytics and infonomics.

    On this episode of Experiencing Data, Doug and I talk about his book Infonomics, the many different ways that businesses can monetize data, the role of creativity and product management in producing innovative data products, and the ever-evolving role of the Chief Data Officer.

    In our chat, we covered:

    • Why Doug's book Infonomics argues that measuring data for its value potential is key to effectively managing and monetizing it. (2:21)
    • A 'regenerative asset': Innovative methods for deploying and monetizing data — and the differences between direct, indirect, and inverted data monetization. (5:10)
    • The responsibilities of a Chief Data Officer (CDO) — and how taking a product management approach to data can generate additional value. (13:28)
    • Why Doug believes that a 'lack of vision and leadership' is partly behind organizational hesitancy of data monetization efforts. (17:10)
    • ‘A pretty unique skill’: The importance of bringing in people with experience creating and marketing data products when monetizing data. (19:10)
    • Insurance and torrenting: Creative ways companies have leveraged their data to generate additional value. (24:27)
    • Ethical data monetization: Why Doug believes consumers must receive a benefit when organizations leverage their data for profit. (27:14)
    • The data monetization workshops Doug runs for businesses looking to generate new value streams from their data. (29:42)
    Quotes from Today’s Episode

    “Many organizations [endure] a vicious cycle of not measuring [their data], and therefore not managing, and therefore not monetizing their data as well as they can. The idea behind my book Infonomics is, flip that. I’ll just start with measuring your data, understanding what you have, its quality characteristics, and its value potential. But vision is important as well, and so that’s where we start with monetization, and thinking more broadly about the ways to generate measurable economic benefits from data.” - Doug (4:13)

    “A lot of people will compare data to oil and say that ‘Data is the new oil.’ But you can only use a drop of oil one way at a time. When you consume a drop of oil, it creates heat and energy and pollution, and when you use a drop of oil, it doesn’t generate more oil. Data is very different. It has unique economic qualities that economists would call a non-rivalrous, non-depleting, and regenerative asset.” - Doug (7:52)

    “The Chief Data Officer (CDO) role has come on strong in organizations that really want to manage their data as an actual asset, ensure that it is accounted for as generating value and is being managed and controlled effectively. Most CDOs play both offense and defense in controlling and governing data on one side and in enabling it on the other side to drive more business value.”- Doug (14:17)

    “The more successful teams that I read about and I see tend to have a mixed skill set; they’re cross-functional. There’s a space for creativity and learning, and there’s a concept of experimentation happening there.” - Brian (19:10)

    “Companies that become more data-driven have a market-to-book value that’s nearly two times higher than the market average. And companies that make the bulk of their revenue by selling data products or derivative data have a market-to-book value that’s nearly three times the market average. So, there's a really compelling reason to do this. It’s just that not a lot of executives are really comfortable with it. Data continues to be something that’s really amorphous, and they don’t really have their heads around it.” - Doug (21:38)

    “There’s got to be a benefit to the consumer in the way that you use their data. And that benefit has to be clear, and defined, and ideally measured for them, that we’re able to reduce the price of this product that you use because we’re able to share your data, even if it’s anonymously; this reduces the price of your product.” - Doug (28:24)

    Links referenced
    • Infonomics: https://www.amazon.com/Infonomics-Monetize-Information-Competitive-Advantage/dp/1138090387
    • Email: dlaney@westmonroe.com
    • LinkedIn: https://www.linkedin.com/in/douglaney/
    • Westmonroe.com: https://westmonroe.com
    • Coursera: https://www.coursera.org/instructor/dblaney

    068 - Why User Adoption of Enterprise Data Products Continues to Lag with International Institute for Analytics Executive VP Drew Smith Jun 29, 2021

    Drew Smith knows how much value data analytics can add to a business when done right.

    Having worked at the IKEA Group for 17 years, Drew helped the company become more data-driven, implementing successful strategies for data analytics and governance across multiple areas of the company.

    Now, Drew serves as the Executive Vice President of the Analytics Leadership Consortium at the International Institute for Analytics, where he helps Fortune 1000 companies successfully leverage analytics and data science.

    On this episode of Experiencing Data, Drew and I talk a lot about the factors contributing to low adoption rates of data products, how product and design-thinking approaches can help, and the value of proper one-on-one research with customers.

    In our chat, we covered:

    • 'It’s bad and getting worse': Drew's take on the factors behind low adoption of data products. (1:08)
    • Decentralizing data analytics: How understanding a user's business problems by including them in the design process can lead to increased adoption of data products. (6:22)
    • The importance for business leaders to have a conceptual understanding of the algorithms used in decision-making data products. (14:43)
    • Why data analysts need to focus more on providing business value with the models they create. (18:14)
    • Looking for restless curiosity in new hires for data teams — and the importance of nurturing new learning through training. (22:19)
    • The value of spending one-on-one time with end-users to research their decision-making process before creating a data product. (27:00)
    • User-informed data products: The benefits of design and product-thinking approaches when creating data analytics solutions. (33:04)
    • How Drew's view of data analytics has changed over 15 years in the field. (45:34)
    Quotes from Today’s Episode

    “I think as we [decentralize analytics back to functional areas] — as firms keep all of the good parts of centralizing, and pitch out the stuff that doesn’t work — I think we’ll start to see some changes [when it comes to the adoption of data products.]” - Drew (10:07)

    “I think data people need to accept that the outcome is not the model — the outcome is a business performance which is measurable, material, and worth the change.” - Drew (21:52)

    “We talk about the concept of outcomes over outputs a lot on this podcast, and it’s really about understanding what is the downstream [effect] that emerges from the thing I made. Nobody really wants the thing you made; they just want the result of the thing you made. We have to explore what that is earlier in the process — and asking, “Why?” is very important.” - Brian (22:21)

    “I have often said that my favorite people in the room, wherever I am, aren’t the smartest, it’s the most curious.” - Drew (23:55)

    “For engineers and people that make things, it’s a lot more fun to make stuff that gets used. Just at the simplest level, the fact that someone cared and it didn’t just get shelved, and especially when you spent half your year on this thing, and your performance review is tied to it, it’s just more enjoyable to work on it when someone’s happy with the outcome.” - Brian (33:04)

    “Product thinking starts with the assumption that ‘this is a good product,’ it’s usable and it’s making our business better, but it’s not finished. It’s a continuous loop. It’s feeding back in data through its exhaust. The user is using it — maybe even in ways I didn’t imagine — and those ways are better than I imagined, or worse than I imagined, or different than I imagined, but they inform the product.” - Drew (36:35)

    Links Referenced
    • Email: dsmith@iiaanalytics.com
    • Company site: https://iiaanalytics.com
    • LinkedIn: https://www.linkedin.com/in/andrewjsmithknownasdrew/

    • Analytics Leadership Consortium: https://iianalytics.com/services/analytics-leadership-consortium


    067 - Why Roche Diagnostics’ BI and Data Science Teams Are Adopting Human-Centered Design and UX featuring Omar Khawaja Jun 15, 2021

    On today’s episode of Experiencing Data, I’m so excited to have Omar Khawaja on to talk about how his team is integrating user-centered design into data science, BI and analytics at Roche Diagnostics.

    In this episode, Omar and I have a great discussion about techniques for creating more user-centered data products that produce value — as well as how taking such an approach can lead to needed change management on how data is used and interpreted.

    In our chat, we covered:

    • What Omar is responsible for in his role as Head of BI & Analytics at Roche Diagnostics — and why a human-centered design approach to data analytics is important to him. (0:57)
    • Understanding the end-user's needs: Techniques for creating more user-centric products — and the challenges of taking on such an approach. (6:10)
    • Dissecting 'data culture': Why Omar believes greater implementation of data-driven decision-making begins with IT 'demonstrating' the approach's benefits. (9:31)
    • Understanding user personas: How Roche is delivering better outcomes for medical patients by bringing analytical insights to life. (15:19)
    • How human-centered design yields early 'actionable insights' that can lead to needed change management on how data is used and interpreted. (22:12)
    • The journey of learning: Why 'it's everybody's job' to be focused on user experience — and how field research can help determine an end-user's needs. (27:26)
    • Omar's love of cricket and the statistics collected about the sport! (31:23)
    Resources and Links:
    • Roche Diagnostics: https://www.roche.com/
    • LinkedIn: https://www.linkedin.com/in/kmaomar/
    • Twitter: https://twitter.com/kmaomar
    Quotes from Today’s Episode

    “I’ve been in the area of data and analytics for two decades, and out of my own learning — and I’ve learned it the hard way — at the end of the day, whether we are doing these projects or products, they have to be used by people. The human factor naturally comes in.” - Omar (2:27)

    “Especially when we’re talking about enterprise software, and some of these more complex solutions, we don’t really want people noticing the design to begin with. We just want it to feel valuable, and intuitive, and useful right out of the box, right from the start.” - Brian (4:08)

    “When we are doing interviews with [end-users] as part of the whole user experience [process], you learn to understand what’s being said in between the lines, and then you learn how to ask the right questions. Those exploratory questions really help you understand: What is the real need?” - Omar (8:46)

    “People are talking about data-driven [cultures], data-informed [cultures] — but at the end of the day, it has to start by demonstrating what change we want. ... Can we practice what we are trying to preach? Am I demonstrating that with my team when I’m making decisions in my day-to-day life? How do I use the data? IT is very good at asking our business colleagues and sometimes fellow IT colleagues to use various enterprise IT and business tools. Are we using, ourselves, those tools nicely?” - Omar (11:33)

    “We focus a lot on what’s technically possible, but to me, there’s often a gap between the human need and what the data can actually support. And the bigger that gap is, the less chance things get used. The more we can try to close that gap when we get into the implementation stage, the more successful we probably will be with getting people to care and to actually use these solutions.” - Brian (22:20)

    “When we are working in the area of data and analytics, I think it’s super important to know how this data and these insights will be used — which requires an element of putting yourself in the user’s shoes. In the case of an enterprise setup, it’s important for me to understand the end-user in different roles and personas: what they are doing and what their job is like. [This involves] sitting with them, visiting them, visiting the labs, visiting the factory floors, sitting with the finance team, and learning what they do in the system. These are the places where you have your learning.” - Omar (29:09)


    066 - How Alison Magyari Used Design Thinking to Transform Eaton’s Business Analytics Approach to Creating Data Products Jun 01, 2021

    Earlier this year, the always informative Women in Analytics Conference took place online. I didn’t go — but a blog post about one of the conference’s presentations on the International Institute for Analytics’ website caught my attention.

    The post highlighted key points from a talk called Design Thinking in Analytics that was given at the conference by Alison Magyari, an IT Manager at Eaton. In her presentation, Alison explains the four design steps she utilizes when starting a new project — as well as what “design thinking” means to her.

    Human-centered design is one of the main themes of Experiencing Data, so given Alison’s talk about tapping into the emotional state of customers to create better designed data products, I knew she would be a great guest. In this episode, Alison and I have a great discussion about building a “design thinking mindset” — as well as the importance of keeping the design process flexible.

    In our chat, we covered:

    • How Alison employs design thinking in her role at Eaton to better understand the 'voice of the customer.' (0:28)
    • Same frustrations, no excitement, little use: The factors that led to Alison's pursuit of a design thinking mindset when building data products at Eaton. (3:35)
    • Alleviating the 'pain points' with design thinking: The importance of understanding how a data tool makes users feel. (10:24)
    • How Eaton's business analysts (and end users) take ownership of the design process — and the challenges Alison faced in building a team of business analysts committed to design thinking. (15:51)
    • 'It's not one size fits all': The benefits of keeping the design process flexible — and why curiosity and empathy are traits of successful designers. (21:06)
    • 'Pay me now or pay me later': How Alison dealt with pushback to spending more time and resources on design — and how she dealt with skepticism from business users. (24:09)
    Resources and Links:
    • Blog post on International Institute for Analytics: https://www.iianalytics.com/blog/2021/2/25/utilizing-human-centered-design-to-inform-products-and-reach-communities
    • Eaton: https://www.eaton.com/
    • LinkedIn: https://www.linkedin.com/in/alisonmagyari/
    • Email: alisonmagyari@eaton.com
    Quotes from Today’s Episode

    “In IT, it’s really interesting how sometimes we get caught up in just looking at the technology for what it is, and we forget that the technology is there to serve our business partners.” - Alison (2:00)

    “You can give people exactly what they asked for, but if you’re designing solutions and data-driven products with someone, and they’re really for somebody else, you actually have to dig in to figure out the unarticulated needs. And they may not know how to invite you in to ask for that. They may not even know how they’re going to make a decision with data about something. You could say, ‘Well, you’re not prepared to talk to us yet.’ Or, you can be part of helping them work it out: how will you make a decision with this information? Let us be part of that problem-finding exercise with you, not just the solution part. Because you can fail if you just give people what they asked for, so it’s best to be part of the problem finding, not just the solving.” - Brian (8:42)

    “During our design process, we noted down what the sentiment of our users was while they were using our data product. … Our users so appreciated when we would mirror back to them our observations about what they were feeling, and we were right about it. I mean, they were much more open to talking to us. They were much more open and they shared exactly what they were feeling.” - Alison (12:51)

    “In our case, we did have the business analyst team really own the design process. Towards the end, we were the champions for it, but then our business users really took ownership, which I was proud of. They realized that if they didn’t embrace this, that they were going to have to deal with the same pain points for years to come. They didn’t want to deal with that, so they were really good partners in taking ownership at the end of the day.” - Alison (16:56)

    “The way you learn how to do design is by doing it. … The second thing is that you don’t have to do all of it to get some value out of it. You could just do prototyping, you could do usability evaluation, you could do ‘what if’ analyses. You can do a little of one thing and probably get some value out of that fairly early, and it’s fairly safe. And then over time, you can learn other techniques. Eventually, you will have a library of techniques that you can apply. It’s a mindset; it’s really about changing the mind. It’s heads, not hands, as I sometimes say: it’s not really about hands. It’s about how we think and approach problem-solving.” - Brian (20:16)

    “I think everybody can do design, but I think the ones that have been incredibly successful at it have a natural curiosity. They don’t just stop with the first answer that they get. They want to know: ‘If I were doing this job, would I be satisfied with compiling a 50-column spreadsheet every single day of my life?’ Probably not. It’s curiosity and empathy — if you have those traits naturally, then design is just kind of a better fit.” - Alison (23:15)


    065 - Balancing Human Intuition and Machine Intelligence with Salesforce Director of Product Management Pavan Tumu May 18, 2021

    I once saw a discussion on LinkedIn about a fraud detection model that had been built but never used. The model worked — and it was expensive — but it simply didn’t get used, because the humans in the loop were not incentivized to use it.

    It was on this very thread that I first met Salesforce Director of Product Management Pavan Tumu, who chimed in about a similar experience he went through. When I heard about his experience, I asked him if he would share it with you, and he agreed. So, today on the Experiencing Data podcast, I’m excited to have Pavan on to talk about some lessons he learned while designing ad-spend software that utilized advanced analytics — and the role of the humans in the loop. We discussed:

    • Pavan's role as Director of Product Management at Salesforce and how he works to make data easier to use for teams. (0:40)
    • Pavan's work protecting large-dollar advertising accounts from bad actors by designing an ML system that predicts and caps ad spending. (6:10)
    • 'Human override of the machine': How Pavan addressed concerns that the advertising security system would incorrectly police legitimate large-dollar ad spends. (12:22)
    • How the advertising security model Pavan worked on learned from human feedback. (24:49)
    • How leading with "why" when designing data products yields a better understanding of what customers need to solve. (29:05)

    064 - How AI Shapes the Products of Startups in MIT’s “Tough Tech” Venture Fund, The Engine feat. General Partner, Reed Sturtevant May 04, 2021

    Reed Sturtevant sees a lot of untapped potential in “tough tech.”

    As a General Partner at The Engine, a venture capital firm launched by MIT, Reed and his colleagues invest in companies with breakthrough technology that, if successful, could positively transform the world.

    It’s been about 15 years since I last caught up with Reed — who was CTO at a startup we worked at together — so I’m excited to welcome him on this episode of Experiencing Data! Reed and I talked about AI and how some of the portfolio companies in his fund are using data to produce better products, solutions, and inventions to tackle some of the world’s toughest challenges.

    In our chat, we covered:

    • How Reed's venture capital firm, The Engine, is investing in technology-driven businesses focused on making positive social impacts. (0:28)
    • The challenges that technical PhDs and postdocs face when transitioning from academia to entrepreneurship. (2:22)
    • Focusing on a greater mission: The importance of self-examining whether an invention would be a good business. (5:16)
    • How one technology business invested in by The Engine, The Routing Company, is leveraging AI and data to optimize public transportation and bridge service gaps. (9:05)
    • Understanding and solving a problem: Using ‘design exercises’ to find successful market fits for existing technological solutions. (16:53)
    • Solutions first, problems second: Why asking the right questions is key to mapping a technological solution back to a problem in the market. (19:31)
    • Understanding and articulating a product’s value to potential buyers. (22:54)
    • How the go-to-market strategies of software companies have changed over the last few decades. (26:16)
    Resources and Links:
    • The Engine: https://www.engine.xyz/
    Quotes from Today’s Episode

    “There have been a couple of times while working at The Engine when I’ve taken it as a sign of maturity when a team self-examines whether their invention is actually the right way to build a business.” - Reed (5:59)

    “For some of the data scientists I know, particularly with AI, executive teams can mandate AI without really understanding the problem they want to solve. It actually pushes the problem discovery onto the solution people — but they’re not always the ones trained to go find the problems.” - Brian (19:42)

    “You can keep hitting people over the head with a product, or you can go figure out what people care about and determine how you can slide your solution into something they care about. ... You don’t know that until you go out and talk to them, listen, and get into their world. And I think that’s still something that’s not happening a lot with data teams.” - Brian (24:45)

    “I think there really is a maturity among even the early-stage teams now, where they have a shelf full of techniques that they can pick and choose from in terms of how to build a product, how to put it in front of people, and how to have the [user] experience be a gentle on-ramp.” - Reed, on startups (27:29)


    063 - Beyond Compliance: Designing Data Products With Data Privacy As a UX Benefit with The Data Diva (Debbie Reynolds) Apr 20, 2021

    Debbie Reynolds is known as “The Data Diva” — and for good reason.

    In addition to being founder, CEO, and chief data privacy officer of her own successful consulting firm, Debbie was named one of the Global Top 20 CyberRisk Communicators by The European Risk Policy Institute in 2020. She has also written several books, including The GDPR Challenge: Privacy, Technology, and Compliance in an Age of Accelerating Change, as well as articles for other publications.

    If you are building data products, especially customer-facing software, you’ll want to tune into this episode. Debbie and I had an awesome discussion about data privacy from the lens of user experience, instead of the typical angle we are all used to: legal compliance. While collecting user data can enable better user experiences, we can also break a customer’s trust if we don’t request access properly.

    In our chat, we covered:

    • 'Humans are using your product': What it means to be a 'data steward' when building software. (0:27)
    • 'Privacy by design': The importance for software creators to think about privacy throughout the entire product creation process. (4:32)
    • The different laws (and lack thereof) regarding data privacy — and the importance of thinking about a product's potential harm during the design process. (6:58)
    • The importance of having 'diversity at all levels' when building data products. (16:41)
    • The role of transparency in data collection. (19:41)
    • Fostering a positive and collaborative relationship between a product or service’s designers, product owners, and legal compliance experts. (24:55)
    • The future of data monetization and how it relates to privacy. (29:18)
    Resources and Links:
    • Debbie’s website
    • Twitter: @DebbieDataDiva
    • Debbie’s LinkedIn
    Quotes from Today’s Episode

    When it comes to your product, humans are using it. Regardless of whether the users are internal or external — what I tell people is to put themselves in the shoes of someone who’s using this and think about what you would want to have done with your information or with your rights. Putting it in that context, I think, helps people think and get out of their head about it. Obviously there’s a lot of skill and a lot of experience that it takes to build these products and think about them in technical ways. But I also try to tell people that when you’re dealing with data and you’re building products, you’re a data steward. The data belongs to someone else, and you’re holding it for them, or you’re allowing them to either have access to it or leverage it in some way. So, think about yourself and what you would think you would want done with your information. - Debbie (3:28)

    Privacy by design is looking at the fundamental levels of how people are creating things, and having them think about privacy as they’re doing that creation. When that happens, then privacy is not a difficult thing at the end. Privacy really isn’t something you could tack on at the end of something; it’s something that becomes harder if it’s not baked in. So, being able to think about those things throughout the process makes it easier. We’re seeing situations now where consumers are starting to vote with their feet — if they feel like a tool or a process isn’t respecting their privacy rights, they want to be able to choose other things. So, I think that’s just the way of the world. … It may be a situation where you’re going to lose customers or market share if you’re not thinking about the rights of individuals. - Debbie (5:20)

    I think diversity at all levels is important when it comes to data privacy, such as diversity in skill sets, points of view, and regional differences. … I think people in the EU — because privacy is a fundamental human right — feel about it differently than we do here in the US where our privacy rights don’t really kick in unless it’s a transaction. ... The parallel I say is that people in Europe feel about privacy like we feel about freedom of speech here — it’s just very deeply ingrained in the way that they do things. And a lot of the time, when we’re building products, we don’t want to be collecting data or doing something in ways that would harm the way people feel about your product. So, you definitely have to be respectful of those different kinds of regimes and the way they handle data. … I’ll give you an example of bias that someone showed me, which was really interesting. There was a soap dispenser that was created where you put your hand underneath and then the soap comes out. It’s supposed to be a motion detection thing. And this particular one would not work on people of color. I guess whatever sensor they created, it didn’t have that color in the spectrum of what they thought would be used for detection or whatever. And so those are problems that happen a lot if you don’t have diverse people looking at these products. Because you — as a person that is creating products — you really want the most people possible to be able to use your products. I think there is an imperative on the economic side to make sure these products can work for everyone. - Debbie (17:31)

    Transparency is the wave of the future, I think, because so many privacy laws have it. Almost any privacy law you think of has transparency in it, some way, shape, or form. So, if you’re not trying to be transparent with the people that you’re dealing with, or potential customers, you’re going to end up in trouble. - Debbie (24:35)

    In my experience, while I worked with lawyers in the digital product design space — and it was heaviest when I worked at a financial institution — I watched how the legal and risk department basically crippled stuff constantly. And I say “cripple” because the feeling that I got was there’s a line between adhering to the law and then also — some of this is a gray area, like disclosure. Or, if we show this chart that has this information, is that construed as advice? I understand there’s a lot of legal regulation there. My feeling was, there’s got to be a better way for compliance departments and lawyers that genuinely want to do the right thing in their work to understand how to work with product design, digital design teams, especially ones using data in interesting ways. How do you work with compliance and legal when we’re designing digital products that use data so that it’s a team effort, and it’s not just like, “I’m going to cover every last edge because that’s what I’m here to do is to stop anything that could potentially get us sued.” There is a cost to that. There’s an innovation cost to that. It’s easier, though, to look at the lawyer and say, “Well, I guess they know the law better, so they’re always going to win that argument.” I think there’s a potential risk there. - Brian (25:01)

    Trust is so important. A lot of times in our space, we think about it with machine learning, and AI, and trusting the model predictions and all this kind of stuff, but trust is a brand attribute as well and it’s part of the reason I think design is important because the designers tend to be the most empathetic and user-centered of the bunch. That’s what we’re often there to do is to keep that part in check because we can do almost anything these days with the tech and the data, and some of it’s like, “Should we do this?” And if we do do it, how do we do it so we’re on brand, and the trust is built, and all these other factors go into that user experience. - Brian (34:21)


    062 - Why Ben Shneiderman is Writing a Book on the Importance of Designing Human-Centered AI Apr 06, 2021

    Ben Shneiderman is a leading figure in the field of human-computer interaction (HCI).

    Having founded one of the oldest HCI research centers in the country at the University of Maryland in 1983, Shneiderman has been intently studying the design of computer technology and its use by humans. Currently, Ben is a Distinguished University Professor in the Department of Computer Science at the University of Maryland and is working on a new book on human-centered artificial intelligence.

    I’m so excited to welcome this expert from the field of UX and design to today’s episode of Experiencing Data! Ben and I talked a lot about the complex intersection of human-centered design and AI systems.

    In our chat, we covered:

    • Ben's career studying human-computer interaction and computer science. (0:30)
    • 'Building a culture of safety': Creating and designing ‘safe, reliable and trustworthy’ AI systems. (3:55)
    • 'Like zoning boards': Why Ben thinks we need independent oversight of privately created AI. (12:56)
    • 'There’s no such thing as an autonomous device': Designing human control into AI systems. (18:16)
    • A/B testing, usability testing and controlled experiments: The power of research in designing good user experiences. (21:08)
    • Designing ‘comprehensible, predictable, and controllable’ user interfaces for explainable AI (XAI) systems — and why XAI matters. (30:34)
    • Ben's upcoming book on human-centered AI. (35:55)
    Resources and Links:
    • People-Centered Internet: https://peoplecentered.net/
    • Designing the User Interface (one of Ben’s earlier books): https://www.amazon.com/Designing-User-Interface-Human-Computer-Interaction/dp/013438038X
    • Bridging the Gap Between Ethics and Practice: https://doi.org/10.1145/3419764
    • Partnership on AI: https://www.partnershiponai.org/
    • AI Incident Database: https://www.partnershiponai.org/aiincidentdatabase/
    • University of Maryland Human-Computer Interaction Lab: https://hcil.umd.edu/
    • ACM Conference on Intelligent User Interfaces: https://iui.acm.org/2021/hcai_tutorial.html
    • Human-Computer Interaction Lab, University of Maryland, Annual Symposium: https://hcil.umd.edu/tutorial-human-centered-ai/
    • Ben on Twitter: https://twitter.com/benbendc
    Quotes from Today’s Episode

    The world of AI has certainly grown and blossomed — it’s the hot topic everywhere you go. It’s the hot topic among businesses around the world — governments are launching agencies to monitor AI and are also making regulatory moves and rules. … People want explainable AI; they want responsible AI; they want safe, reliable, and trustworthy AI. They want a lot of things, but they’re not always sure how to get them. The world of human-computer interaction has a long history of giving people what they want, and what they need. That blending seems like a natural way for AI to grow and to accommodate the needs of real people who have real problems. And not only the methods for studying the users, but the rules, the principles, the guidelines for making it happen. So, that’s where the action is. Of course, what we really want from AI is to make our world a better place, and that’s a tall order, but we start by talking about the things that matter — the human values: human rights, access to justice, and the dignity of every person. We want to support individual goals, a person’s sense of self-efficacy — they can do what they need to in the world, their creativity, their responsibility, and their social connections; they want to reach out to people. So, those are the sort of high aspirational goals that become the hard work of figuring out how to build it. And that’s where we want to go. - Ben (2:05)

    The software engineering teams creating AI systems have got real work to do. They need the right kind of workflows, engineering patterns, and Agile development methods that will work for AI. The AI world is different because it’s not just programming, but it also involves the use of data that’s used for training. The key distinction is that the data that drives the AI has to be the appropriate data, it has to be unbiased, it has to be fair, it has to be appropriate to the task at hand. And many people and many companies are coming to grips with how to manage that. This has become controversial, let’s say, in issues like granting parole, or mortgages, or hiring people. There was a controversy that Amazon ran into when its hiring algorithm favored men rather than women. There’s been bias in facial recognition algorithms, which were less accurate with people of color. That’s led to some real problems in the real world. And that’s where we have to make sure we do a much better job and the tools of human-computer interaction are very effective in building these better systems in testing and evaluating. - Ben (6:10)

    Every company will tell you, “We do a really good job in checking out our AI systems.” That’s great. We want every company to do a really good job. But we also want independent oversight of somebody who’s outside the company — someone who knows the field, who’s looked at systems at other companies, and who can bring ideas and bring understanding of the dangers as well. These systems operate in an adversarial environment — there are malicious actors out there who are causing trouble. You need to understand what the dangers and threats are to the use of your system. You need to understand where the biases come from, what dangers are there, and where the software has failed in other places. You may know what happens in your company, but you can benefit by learning what happens outside your company, and that’s where independent oversight from accounting companies, from governmental regulators, and from other independent groups is so valuable. - Ben (15:04)

    There’s no such thing as an autonomous device. Someone owns it; somebody’s responsible for it; someone starts it; someone stops it; someone fixes it; someone notices when it’s performing poorly. … Responsibility is a pretty key factor here. So, if there’s something going on, if a manager is deciding to use some AI system, what they need is a control panel, let them know: what’s happening? What’s it doing? What’s going wrong and what’s going right? That kind of supervisory autonomy is what I talk about, not full machine autonomy that’s hidden away and you never see it because that’s just head-in-the-sand thinking. What you want to do is expose the operation of a system, and where possible, give the stakeholders who are responsible for performance the right kind of control panel and the right kind of data. … Feedback is the breakfast of champions. And companies know that. They want to be able to measure the success stories, and they want to know their failures, so they can reduce them. The continuous improvement mantra is alive and well. We do want to keep tracking what’s going on and make sure it gets better. Every quarter. - Ben (19:41)

    Google has had some issues regarding hiring in the AI research area, and so has Facebook with elections and the way that algorithms tend to become echo chambers. These companies — and this is not through heavy research — probably have the heaviest investment of user experience professionals within data science organizations. They have UX people, ML-UX people, UX-for-AI people; they’re at the cutting edge. I see a lot more generalist designers in most other companies. Most of them are rather unfamiliar with any of this or with the ramifications on the design work that they’re doing. But even these largest companies, which probably reach the most people out there, are getting some of this really important stuff wrong. - Brian (26:36)

    Explainability is a competitive advantage for an AI system. People will gravitate towards systems that they understand, that they feel in control of, that are predictable. So, the big discussion about explainable AI focuses on what’s usually called post-hoc explanations, and Shapley values, LIME, and other methods are usually tied to the post-hoc approach. That is, you use an AI model, you get a result, and you say, “What happened?” Why was I denied parole, or a mortgage, or a job? At that point, you want to get an explanation. Now, that idea is appealing, but I’m afraid I haven’t seen too many success stories of it working. … I’ve been diving through this for years now, looking for examples of good user interfaces for post-hoc explanations. It took me a long time till I found one. The culture of AI model-building would be much bolstered by an infusion of thinking about what the user interface for these explanations will be. Even DARPA’s XAI (Explainable AI) program, which has 11 projects within it, has not really grappled with designing what it’s going to look like. Show it to me. … There is another way, and the strategy is basically prevention. Let’s prevent the user from getting confused so they don’t have to request an explanation. We walk them along, letting the user go step by step (like Amazon’s seven-step checkout process), so you know what’s happened in each step; you can go back, you can explore, you can change things in each part of it. It’s also what TurboTax does so well in really complicated situations: it walks you through. … You want a comprehensible, predictable, and controllable user interface that makes sense as you walk through each step. - Ben (31:13)
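
    To make “post-hoc” concrete, below is a minimal sketch of the Shapley-value approach Ben mentions, assuming the open-source shap library and a scikit-learn model; the mortgage framing and feature names are invented for illustration. Note that this produces raw attribution numbers, which is exactly Ben’s point: turning them into an interface a denied applicant could actually understand is the part the field has barely begun.

        # A minimal post-hoc explanation sketch: the model has already made its
        # decision, and SHAP attributes that one prediction to each input
        # feature after the fact. Data and feature names are hypothetical.
        import pandas as pd
        import shap
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier

        X, y = make_classification(n_samples=500, n_features=4, random_state=0)
        X = pd.DataFrame(X, columns=["income", "debt_ratio", "credit_age", "inquiries"])
        model = RandomForestClassifier(random_state=0).fit(X, y)

        # Explain a single "applicant": per-feature contributions to the output.
        explainer = shap.TreeExplainer(model)
        shap_values = explainer.shap_values(X.iloc[[0]])
        print(shap_values)  # raw numbers; the UI for them is the hard part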


    061 - Applying a Product Mindset to Internal Data Products with Silicon Valley Product Group Partner Marty Cagan Mar 23, 2021

    Marty Cagan has had a storied career as a product executive. With a resume that includes Vice President of Product at Netscape and eBay, Marty is an expert in product management and strategy.

    This week, Marty joins me on Experiencing Data to talk about what a successful data product team looks like, as well as the characteristics of an effective product manager. We also explored the idea of product management applied to internal data teams. Marty and I didn’t necessarily agree on everything in this conversation, but I loved his relentless focus on companies’ customers. We also talked a bit about his new book, Empowered: Ordinary People, Extraordinary Products. I also spoke with Marty about:

    • The responsibilities of a data product team. (0:59)
    • Whether an internally-facing software solution can be considered a 'product.' (5:02)
    • Customer-facing vs. customer-enabling: Why Marty tries hard not to confuse the terminology of internal employees as customers. (7:50)
    • The common personality characteristics and skill sets of effective product managers. (12:53)
    • The importance of 'customer exposure time.' (17:56)
    • The role of product managers in upholding ethical standards. (24:57)
    • The value of a good designer on a product team. (28:07)
    • Why Marty decided to write his latest book, Empowered, about leadership. (30:52)
    Quotes from Today’s Episode

    We try hard not to confuse customers with internal employees — for example, a sales organization, or customer service organization. They are important partners, but when a company starts to confuse these internal organizations with real customers, all kinds of bad things happen — especially to the real customer. [...] A lot of data reporting teams are, in most companies, being crushed with requests. So, how do you decide what to prioritize? Well, a product strategy should help with that and leadership should help with that. But, fundamentally, the actual true customers are going to drive a lot of what we need to do. It’s important that we keep that in mind. - Marty (9:13)

    I come out of the technology space, and, for me, the worlds of product design and product management are two overlapping circles. Some people fall in the middle; some people are a little bit heavier to one side or the other. There’s a lot of focus on empathy, and on understanding how to frame the problem correctly — it’s about not jumping to a solution immediately without really understanding the customer pain point. - Brian (10:47)

    One thing I’ve seen frequently throughout my career is that designers often have no idea how the business sustains itself. They don’t understand how it makes money, they don’t understand how it’s even sold or marketed. They are relentlessly focused on user experience, but the other half of it is making a business viable. - Brian (14:57)

    Ethical issues really do, in almost all cases I see, originate with the leaders. However, it’s also true that they can first manifest themselves in the product teams. The product manager is often the first one to see that this could be a problem, even when it’s totally unintentional. - Marty (26:45)

    My interest has always been product teams, because every good product I know came from a product team. Literally — it is a combination of product design and engineering that generates great products. I’m interested in the nature of that collaboration and in nurturing the dynamics of a healthy team. To me, having strong engineering that’s all engaged with direct customer access is fundamental. Similarly, a professional designer is important — somebody who really understands service design, interaction design, visual design, and the user research behind it. The designer role is responsible for getting inside the heads of the users. This is hard. And it’s one of those things that, when it’s done well, nobody even notices. - Marty (28:54)

    Links Referenced
    • Silicon Valley Product Group: https://svpg.com/
    • Empowered: https://svpg.com/empowered-ordinary-people-extraordinary-products/
    • Inspired: https://svpg.com/inspired-how-to-create-products-customers-love/
    • Twitter: https://twitter.com/cagan
    • LinkedIn: https://www.linkedin.com/in/cagan/


    060 - How NPR Uses Data to Drive Editorial Decisions in the Newsroom with Sr. Dir. of Audience Insights Steve Mulder Mar 08, 2021

    Journalism is one of the keystones of American democracy. For centuries, reporters and editors have held those in power accountable by seeking out the truth and reporting it.

    However, the art of newsgathering has changed dramatically in the digital age. Just take it from NPR Senior Director of Audience Insights Steve Mulder — whose team is helping change the way NPR makes editorial decisions by introducing a streamlined and accessible platform for data analytics and insights.

    Steve and I go way, way back (Lycos anyone!?) — and I’m so excited to welcome him on this episode of Experiencing Data! We talked a lot about the Story Analytics and Data Insights (SANDI) dashboard for NPR content creators that Steve’s team just recently launched, and dove into:

    • How Steve’s design and UX background influences his approach to building analytical tools and insights. (1:04)
    • Why data teams at NPR embrace qualitative UX research when building analytics and insights solutions for the editorial team. (6:03)
    • What the Story Analytics and Data Insights (SANDI) dashboard for NPR’s newsroom is, the goals it is supporting, and the data silos that had to be broken down. (10:52)
    • How the NPR newsroom uses SANDI to measure audience reach and engagement. (14:40)
    • 'It's our job to be translators': The role of moving from ‘what’ to ‘so what’ to ‘now what.’ (22:57)
    Quotes from Today’s Episode

    People with backgrounds in UX and design end up everywhere. And I think it's because we have a couple of things going for us. We are user-centered in our hearts. Our goal is to understand people and what they need — regardless of what space we're talking about. We are grounded in research and getting to the underlying motivations of people and what they need. We're focused on good communication and interpretation and putting knowledge into action — we're generalists. - Steve (1:44)

    The familiar trope is that quantitative research tells you what is going on, and qualitative research tells you why. Qualitative research gets underneath the surface to answer why people feel the way they do. Why are they motivated? Why are they describing their needs in a certain way? - Steve (6:32)

    The more we work with people and develop relationships — and build that deeper sense of trust as an organization with each other — the more openness there is to having a real conversation. - Steve (9:06)

    I’ve been reading a book by Nancy Duarte called DataStory (see Episode 32 of this show), and in the book she talks about a model of career growth [...] that is really in sync with how I've been thinking about it. [...] You begin as an explorer of data — you're swimming in the data and finding insights from the data-first perspective. Over time in your career, you become an explainer. And an explainer is all about creating meaning: what is the context and interpretation that I can bring to this insight that makes it important, that answers the question, “So what?” And then the final step is to inspire, to actually inspire action and inspire new ways of looking at business problems or whatever you're looking at. - Steve (25:50)

    I think that carving things down to what's the simplest is always a big challenge, just because those of us drowning in data are always tempted to expose more of it than we should. - Steve (29:30)

    There's a healthy skepticism in some parts of NPR around data, and around the fact that ‘I don't want data to limit what I do with my job. I don't want it to tell me what to do.’ We spend a lot of time reassuring people that data is never going to make decisions for you — it's just the foundation that you can stand on to better make your own decision. … We don't use the term data-driven decisions. At NPR, we talk about data-informed decisions, because that better reflects the fact that it is data and expertise together that make things magic. - Steve (34:34)

    Resources and Links:
    • Twitter: https://twitter.com/muldermedia

    059 - How Design Thinking Helps Organizations and Data Science Teams Create Economic Value with Machine Learning and Analytics feat. Bill Schmarzo Feb 23, 2021

    With a 30+ year career in data warehousing, BI, and advanced analytics under his belt, Bill has become a leader in the field of big data and data science — and, not to mention, a popular social media influencer. Having previously worked in senior leadership at Dell EMC and Yahoo!, Bill is now an executive fellow and professor at the University of San Francisco School of Management as well as an honorary professor at the National University of Ireland Galway.

    I’m so excited to welcome Bill as my guest on this week’s episode of Experiencing Data. When I first began specializing my consulting in the area of data products, Bill was one of the first leaders I noticed leveraging design thinking on a regular basis in his work. In this long-overdue episode, we dug into some examples of how he’s using it with teams today. Bill sees design as a process of empowering humans to collaborate with one another, and he also shares insights from his new book, “The Economics of Data, Analytics, and Digital Transformation.”

    In total, we covered:

    • Why it’s crucial to understand a customer’s needs when building a data product and how design helps uncover this. (2:04)
    • How running an “envisioning workshop” with a customer before starting a project can help uncover important information that might otherwise be overlooked. (5:09)
    • How to approach the human/machine interaction when using machine learning and AI to guide customers in making decisions – and why it’s necessary at times to allow a human to override the software (see the sketch after this list). (11:15)
    • How teams that embrace design-thinking can create “organizational improvisation” and drive greater value. (14:49)
    • Bill’s take on how to properly prioritize use cases. (17:40)
    • How to identify a data product’s problems ahead of time. (21:36)
    • The trait that Bill sees in the best data scientists and design thinkers. (25:41)
    • How Bill helps transition the practice of data science from a focus on analytic outputs to a focus on operational and business outcomes. (28:40)
    • Bill’s new book, “The Economics of Data, Analytics, and Digital Transformation.” (31:34)
    • Brian and Bill’s take on the need for organizations to create a technological and cultural environment of continuous learning and adapting if they seek to innovate. (38:22)
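
    Since the human-override idea comes up a few times in this episode, here is a minimal sketch of what “allow a human to override the software” can look like in code. Everything here (the reorder scenario, the names suggest_reorder and Recommendation, the confidence value) is hypothetical rather than Bill’s actual method; the pattern is simply to surface the model’s suggestion, let the human decide, and log disagreements so the team can learn from them.

        # A minimal human-override sketch; all names and values are invented.
        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class Recommendation:
            action: str
            confidence: float

        def suggest_reorder(stock_level: int, forecast_demand: int) -> Recommendation:
            """Stand-in for an ML model guiding a purchasing decision."""
            gap = forecast_demand - stock_level
            return Recommendation("reorder" if gap > 0 else "hold", confidence=0.72)

        def decide(rec: Recommendation, human_choice: Optional[str], log: list) -> str:
            # The human can always override; disagreements are logged so the
            # team can study where (and why) people distrust the model.
            if human_choice is not None and human_choice != rec.action:
                log.append({"model": rec.action, "human": human_choice})
            return human_choice if human_choice is not None else rec.action

        audit_log: list = []
        rec = suggest_reorder(stock_level=40, forecast_demand=90)
        print(decide(rec, human_choice="hold", log=audit_log))  # "hold" wins
        print(audit_log)  # [{'model': 'reorder', 'human': 'hold'}]
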
    Quotes from Today’s Episode

    There’s certainly a UI aspect of design, which is to build products that are more conducive for the user to interact with – products that are more natural, more intuitive … But I also think about design from an empowerment perspective. When I consider design-thinking techniques, I think about how I can empower the wide variety of stakeholders that I need to service with my data science. I’m looking to identify and uncover those variables and metrics that might be better predictors of performance. To me, at the very beginning of the design process, it’s about empowering everybody to have ideas. – Bill (2:25)

    Envisioning workshops are designed to let people realize that there are people all across the organization who bring very different perspectives to a problem. When you combine those perspectives, you have an illuminating thing. Now let’s be honest: many large organizations don’t do this well at all. And the reason why is not because they’re not smart, it’s because in many cases, senior executives aren’t willing to let go. Design thinking isn’t empowering the senior executives. In many cases, it’s about empowering those frontline employees … If you have a culture where the senior executives have to be the smartest people in the room, design is doomed. – Bill (10:15)

    Organizational charts are the great destroyer of creativity because you put people in boxes. We talk about data silos, but we create these human silos where people can’t go out … Screw boxes. We want to create swirls – we want to create empowered teams. In fact, the most powerful teams are the ones who can embrace design thinking to create what I call organizational improvisation. Meaning, you have the ability to mix and match people across the organization based on their skill sets for the problem at hand, dissipate them when the problem is gone, and reconstitute them around a different problem. It’s like watching a great soccer team play … These players have been trained and conditioned, they make their own decisions on the field, and they interact with each other. Watching a good soccer team is like ballet because they’ve all been empowered to make decisions. – Bill (15:30)

    I tend to feel like design thinkers can be born from any job title, not just “creatives” – even certain types of very technically gifted people can be really good at it. A lot of it is focused around the types of questions they ask and their ability to be empathetic. – Brian (25:55)

    The best design thinkers and the best data scientists share one common trait: they’re humble. They have the ability to ask questions, to learn. They don’t walk in with an answer…and here’s the beauty of design thinking: anybody can do it. But you have to be humble. If you already know the answer, then you’re never going to be a good designer. Never. – Bill (26:34)

    From an economic perspective … The value of data isn’t in having it. The value in data is how you use it to generate more value … In the same way that design thinking is learning how to speak the language of the customer, economics is about learning how to speak the language of the business. And when you bring those concepts together around data science, that’s a blend that is truly a game-changer. – Bill (36:03)

    058 - IoT Spotlight: 8 UI / UX Strategies for Designing Indispensable Monitoring Applications Feb 09, 2021

    On this solo episode of Experiencing Data, I discussed eight design strategies that will help your data product team create immensely valuable IoT monitoring applications.

    Whether your team is creating a system for predictive maintenance, forecasting, or root-cause analysis, analytics are often a big part of helping users make sense of the huge volumes of telemetry and data an IoT system can generate. Oftentimes, product or technical teams see the game as, “How do we display all the telemetry from the system in a way the user can understand?” The problem with this approach is that it is completely decoupled from the business objectives the customers likely have, and it is a recipe for a very hard-to-use application.

    The reality is that a successful application may require little to no human interaction at all. That may actually be the biggest value you can create for your customer: showing up only when necessary, with just the right insight.

    So, let’s dive into some design considerations for these analytical monitoring applications, dashboards, and experiences.

    In total, I covered:

    • Why it’s important to consider that a monitoring application’s user experience may happen across multiple screens, interfaces, departments, or people. (2:32)
    • The design considerations and benefits when building a forecasting or predictive application that allows customers to change parameters and explore “what-if” scenarios. (6:09)
    • Designing for seasonality: What it means to have a monitoring application that understands and adapts to periodicity in the real world. (11:03)
    • How the best user experiences for monitoring and maintenance applications using analytics seamlessly integrate people, processes and related technology. (16:03)
    • The role of alerting and notifications in these systems … and where things can go wrong if they aren’t well designed from a UX perspective. (19:49)
    • How to keep the customer’s (user’s) business top of mind within the application UX. (23:19)
    • One secret to making time-series charts in particular more powerful and valuable to users. (25:24)
    • Some of the common features and use cases I see monitoring applications needing to support on out-of-the-box dashboards. (27:15)
    Quotes from Today’s Episode

    Consider your data product across multiple applications, screens, departments and people. Be aware that the experience may go beyond the walls of the application sitting in front of you. – Brian (5:58)

    When it comes to building forecast or predictive applications, a model’s accuracy frequently comes second to the interpretability of the model. Because if you don’t have transparency in the UX, then you don’t have trust. And if you don’t have trust, then no one pays attention. If no one pays attention, then none of the data science work you did matters. – Brian (7:15)

    Well-designed applications understand the real world. They know about things like seasonality and what normalcy means in the environment in which the application exists. These applications learn and take new information into consideration as it comes in. – Brian (11:03)
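
    As a minimal sketch of what “understanding seasonality” can mean in practice, the snippet below defines normal per time slot (day of week and hour) rather than as one fixed threshold, so a reading is flagged only when it is unusual for that moment. The telemetry is simulated and pandas is assumed; a production system would likely use a proper forecasting model, but the idea is the same.

        # Seasonality-aware alerting sketch; the telemetry is simulated.
        import numpy as np
        import pandas as pd

        rng = pd.date_range("2021-01-01", periods=24 * 7 * 8, freq="h")
        # Hypothetical device load with a daily cycle plus noise.
        load = 50 + 20 * np.sin(2 * np.pi * rng.hour / 24)
        load = load + np.random.default_rng(0).normal(0, 3, len(rng))
        ts = pd.Series(load, index=rng)

        # Baseline mean and spread for each (day-of-week, hour) slot.
        keys = [ts.index.dayofweek, ts.index.hour]
        baseline_mean = ts.groupby(keys).transform("mean")
        baseline_std = ts.groupby(keys).transform("std")

        # Flag readings unusual *for that time slot*, not in absolute terms.
        anomalies = ts[(ts - baseline_mean).abs() > 3 * baseline_std]
        print(anomalies.head())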

    The greatest IoT UIs and UXs may be the ones where you rarely have to use the service to begin with. These services give you alerts and notifications at the right time with the right amount of information along with actionable next steps. – Brian (20:00)

    With tons of IoT telemetry comes a lot of discussion of stats and metrics visualized on charts and tables. But at the end of the day, your customer probably doesn’t really care about the monitored objects themselves. Ultimately, the devices being monitored are there to provide business value to your customer. Working backwards from the business value perspective helps guide solid UX design choices. – Brian (23:18)


    057 - How to Design Successful Enterprise Data Products When You Have Multiple User Types to Satisfy Jan 26, 2021

    Designing a data product from the ground up is a daunting task, and it is complicated further when you have several different user types who all have different expectations for the service. Whether an application offers a wealth of traditional historical analytics or leverages predictive capabilities using machine learning, different users may expect very different things from it. As a leader, you may be forced to make choices about how and what data you’ll present, and how you will allow these different user types to interact with it. These choices can be difficult when domain knowledge, time availability, job responsibility, and the need for control vary greatly across these personas. So what should you do?

    To answer that, today I’m going solo on Experiencing Data to highlight some strategies I think about when designing multi-user enterprise data products so that in the end, something truly innovative, useful, and valuable emerges.

    In total, I covered:

    • Why UX research is imperative and the types of research I think are important (4:43)
    • The importance for teams to have a single understanding of how a product’s success will be measured before it is built and launched (and how research helps clarify this). (8:28)
    • The pros and cons of using the design tool called “personas” to help guide design decision making for multiple different user types. (19:44)
    • The idea of ‘Minimum valuable product’ and how you balance this with multiple user types (24:26)
    • The strategy I use to reduce complexity and find opportunities to solve multiple users’ needs with a single solution (29:26)
    • Declarative vs. exploratory analytics, and why the distinction matters. (32:48)
    • My take on offering customization as a means to satisfy multiple customer types. (35:15)
    • Expectations leaders should have, particularly if you do not have trained product designers or UX professionals on your team. (43:56)
    Resources and Links
    • My training seminar, Designing Human-Centered Data Products: http://designingforanalytics.com/theseminar
    • Designing for Analytics Self-Assessment Guide: http://designingforanalytics.com/guide
    • (Book) The User Is Always Right: A Practical Guide to Creating and Using Personas for the Web by Steve Mulder https://www.amazon.com/User-Always-Right-Practical-Creating/dp/0321434536
    • My C-E-D Design Framework for Integrating Advanced Analytics into Decision Support Software: https://designingforanalytics.com/resources/c-e-d-ux-framework-for-advanced-analytics/
    • Homepage for all of my free resources on designing innovative machine learning and analytics solutions: designingforanalytics.com/resources

    056 - How Design Helps Drive Adoption of Data Products Used for Social Work with Chief Data Officer Dr. Besa Bauta of MercyFirst Jan 12, 2021

    There’s a lot at stake in the decisions that social workers have to make when they care for people — and Dr. Besa Bauta keeps this in mind when her teams are designing the data products that care providers use in the field.

    As Chief Data Officer at MercyFirst, a New York-based social service nonprofit, Besa explains how her teams use design and design thinking to create useful decision support applications that lead to improved clinician-client interactions, health and well-being outcomes, and better decision making.

    In addition to her work at MercyFirst, Besa currently serves as an adjunct assistant professor at New York University’s Silver School of Social Work where she teaches public health, social science theories and mental/behavioral health. On today’s episode, Besa and I talked about how MercyFirst’s focus on user experience improves its delivery of care and the challenges Besa and her team have encountered in driving adoption of new technology.

    In total, we covered:

    • How data digitization is improving the functionality of information technologies. (1:40)
    • Why MercyFirst, a social service organization, partners with technology companies to create useful data products. (3:30)
    • How MercyFirst decides which applications are worth developing. (7:06)
    • Evaluating effectiveness: How MercyFirst’s focus on user experience improves the delivery of care. (10:45)
    • “With anything new, there is always fear”: The challenges MercyFirst has with getting buy-in on new technology from both patients and staff. (15:07)
    • Besa’s take on why it is important to engage the correct stakeholders early on in the design of an application — and why she engages the naysayers. (20:05)
    • The challenges MercyFirst faces with getting its end-users to participate in providing feedback on an application’s design and UX. (24:10)
    • Why Besa believes it is important to be thinking of human-centered design from the inception of a project. (27:50)
    • Why it is imperative to involve key stakeholders in the design process of artificial intelligence and machine learning products. (31:20)
    Quotes from Today’s Episode

    We're not a technology company … so, for us, it’s about finding the right partners that understand our use cases and who are also willing to work alongside us to actually develop something that our end-users — our physicians, for example — are able to use in their interaction with a patient. - Besa

    No one wants to have a different type of application every other week, month, or year. We want to have a solution that grows with the organization. - Besa on the importance of creating a product that is sustainable over time

    If we think about data as largely about providing decision support or decision intelligence, how do you measure that it's designed to do a good job? What's the KPI for choosing good KPIs? - Brian

    Earlier on, engaging with the key stakeholders is really important. You're going to have important gatekeepers, who are going to say, ‘No, no, no,’ — the naysayers. I start with the naysayers first — the harder nuts to crack — and say, ‘How can this improve your process or your service?’ If I could win them over, the rest is cake. Well, almost. Not all the time. - Besa

    Failure is how some orgs learn about just how much design matters. At some point, they realize that data science, engineering, and technical work doesn't count if no human will use that app, model, product, or dashboard when it rolls out. - Brian

    Besa: It was a dud. [laugh].

    Brian: Yeah, if it doesn’t get used, it doesn't matter.

    What my team has done is create workgroups with our vendors and others to sort of shift developmental timelines [...] and change what needs to go into development and production first—and then ensure there's a tiered approach to meet [everyone’s] needs because we work as a collective. It’s not just one healthcare organization: there are many health and social service organizations in the same boat. - Besa

    It's really important to think about the human in the middle of this entire process. Sometimes products get developed without really thinking, ‘Is this going to improve the way I do things? Is it going to improve anything?’ … The more personalized a product is, the better it is and the greater the adoption. - Besa


    055 - What Can Carol Smith’s Ethical AI Work at the DoD Teach Us About Designing Human-Machine Experiences? Dec 29, 2020

    It’s not just science fiction: as AI becomes more complex and prevalent, so do the ethical implications of this new technology. But don’t just take it from me – take it from Carol Smith, a leading voice in the field of UX and AI. Carol is a senior research scientist in human-machine interaction at Carnegie Mellon University’s Emerging Tech Center, a division of the school’s Software Engineering Institute. Formerly a senior researcher for Uber’s self-driving vehicle experience, Carol, who also works as an adjunct professor at the university’s Human-Computer Interaction Institute, does research on ethical AI in her work with the US Department of Defense.

    Throughout her 20 years in the UX field, Carol has studied how focusing on ethics can improve user experience with AI. On today’s episode, Carol and I talked about exactly that: the intersection of user experience and artificial intelligence, what Carol’s work with the DoD has taught her, and why design matters when using machine learning and automation. Better yet, Carol gives us some specific, actionable guidance and her four principles for designing ethical AI systems.

    In total, we covered:

    • “Human-machine teaming”: what Carol learned while researching how passengers would interact with autonomous cars at Uber (2:17)
    • Why Carol focuses on the ethical implications of the user experience research she is doing (4:20)
    • Why designing for AI is both a new endeavor and an extension of existing human-centered design principles (6:24)
    • How knowing a user’s information needs can drive immense value in AI products (9:14)
    • Carol explains how teams can improve their AI product by considering ethics (11:45)
    • “Thinking through the worst-case scenarios”: Why ethics matters in AI development (14:35) and methods to include ethics early in the process (17:11)
    • The intersection between soldiers and artificial intelligence (19:34)
    • Making AI flexible to human oddities and complexities (25:11)
    • How exactly diverse teams help us design better AI solutions (29:00)
    • Carol’s four principles of designing ethical AI systems and “abusability testing” (32:01)
    Quotes from Today’s Episode

    “The craft of design – particularly for #analytics and #AI solutions – is figuring out who this customer – your user – is, and exactly what amount of evidence they need, at what time they need it, and in what format.” – Brian

    “From a user experience, or human-centered design aspect, just trying to learn as much as you can about the individuals who are going to use the system is really helpful … And then beyond that, as you start to think about ethics, there are a lot of activities you can do, just speculation activities that you can do on the couch, so to speak, and think through – what is the worst thing that could happen with the system?” – Carol

    “[For AI, I recommend] ‘abusability testing,’ or ‘black mirror episode testing,’ where you’re really thinking through the absolute worst-case scenario because it really helps you to think about the people who could be the most impacted. And particularly people who are marginalized in society, we really want to be careful that we’re not adding to the already bad situations that they’re already facing.” – Carol, on ways to think about the ethical implications of an AI system

    “I think people need to be more open to doing slightly slower work […] the move fast and break things time is over. It just, it doesn’t work. Too many people do get hurt, and it’s not a good way to make things. We can make them better, slightly slower.” – Carol

    “The four principles of designing ethical AI systems are: accountable to humans, cognizant of speculative risks and benefits, respectful and secure, and honest and usable. And so with these four aspects, we can start to really query the systems and think about different types of protections that we want to provide.” – Carol

    “Keep asking tough questions. Have these tough conversations. This is really hard work. It’s very uncomfortable work for a lot of people. They’re just not used to having these types of ethical conversations, but it’s really important that we become more comfortable with them, and keep asking those questions. Because if we’re not asking the questions, no one else may ask them.” – Carol

    Links
    • Designing Ethical AI Experiences (Agreement and Checklist) (PDF)

    054 - Jared Spool on Designing Innovative ML/AI and Analytics User Experiences Dec 15, 2020

    Jared Spool is arguably the most well-known name in the field of design and user experience. For more than a decade, he has been a witty, powerful voice for why UX is critical to value creation within businesses. Formerly an engineer, Jared started working in UX in 1978, founded UIE (User Interface Engineering) in 1988, and has helped establish the field over the last 30 years. In addition, he advised the US Digital Service / Executive Office of President Obama, and in 2016 he co-founded Center Centre, the user experience design school that’s creating a new generation of industry-ready UX designers.

    Today, however, we turned to the topic of UX in the context of analytics, ML, and AI—and what teams (especially those without trained designers on staff) need to know about creating successful data products.

    In our chat, we covered:

    • Jared’s definition of “design”
    • The definition of UX outcomes, and who should be responsible for defining and delivering them
    • Understanding the “value chain” of user experience and the idea that “everyone” creating the solution is a designer and responsible for UX
    • Brian’s take on the current state of data and AI-awareness within the field of UX —and whether Jared agrees with Brian’s perceptions
    • Why teams should use visual aids to drive change and innovation, and two tools they can use to execute this
    • The relationship between data literacy and design
    • The type of math training Jared thinks is missing in education and why he thinks it should replace calculus in high school
    • Examples of how UX design directly addresses privacy and ethical issues with intelligent devices
    • Some example actions that leaders who are new to the UX profession can do immediately to start driving more value with data products
    Quotes from Today’s Episode

    “Center Centre is a school in Chattanooga for creating UX designers, and it's also the name of the professional development business that we've created around it that helps organizations create and exude excellence in terms of making UX design and product services…” - Jared

    “The reality is this: on the other side of all that data, there are people. There are the direct people, who are interacting with the data, with the intelligence, and with the various elements of what's going on; but at the same time, there are indirect folks. If someone is making decisions based on that intelligence, those decisions affect somebody else's life.” - Jared

    “I think something that's frequently missing here is the ability to think beyond the immediate customer who requests a solution.” - Brian

    “The fact that there are user experience teams anywhere is sort of a new and novel thing. A decade ago, that was very unlikely that you'd go into a business and there’d be a user experience team of any note that had any sort of influence across the business.” - Jared

    [At Netflix], we'd probably put the people who work in the basement on [server and network] performance at the opposite side of the chart from the people who work on the user interface or what we consider the user experience of Netflix […] Except at that one moment where someone's watching their favorite film, and that little spinny thing comes up, and the film pauses, and the experience is completely interrupted. And it's interrupted because the latency, and the throughput, and the resilience of the network are coming through to the user interface. And suddenly, that group of people in the basement are the most important UX designers at Netflix. - Jared

    My feeling is, with the exception of perhaps the FANG companies, the idea of designers being required, or part of the equation when we're developing probabilistic solutions that use machine learning etc., well, it's not even part of the conversation with most user experience leaders that I talk to. - Brian

    Links
    • Center Centre website

    053 - Creating (and Debugging) Successful Data Product Teams with Jesse Anderson Dec 01, 2020

    In this episode of Experiencing Data, I speak with Jesse Anderson, Managing Director of the Big Data Institute and author of a new book titled Data Teams: A Unified Management Model for Successful Data-Focused Teams. Jesse opens up about why teams often run into trouble in their efforts to build data products, and what can be done to drive better outcomes.

    In our chat, we covered:

    • Jesse’s concept of debugging teams
    • How Jesse defines a data product, and how he distinguishes data products from software products
    • What users care about in useful data products
    • Why your tech leads need to be involved with frontline customers, users, and business leaders
    • Brian’s take on Jesse’s definition of a “data team” and the roles involved, especially around two particular disciplines
    • The role that product owners tend to play in highly productive teams
    • What conditions lead teams to building the wrong product
    • How data teams are challenged to bring together parts of the company that never talk to each other – like business, analytics, and engineering teams
    • The differences in how tech companies create software and data products, versus how non-digital natives often go about the process
    Quotes from Today’s Episode

    “I have a sneaking suspicion that leads and even individual contributors will want to read this book, but it’s more [to provide] suggestions for middle, upper, and executive management.” – Jesse

    “With data engineering, we can’t make v1 and v2 of data products. We actually have to make sure that our data products can be changed and evolve, otherwise we will be constantly shooting ourselves in the foot. And this is where the experience or the difference between a data engineer and software engineer comes into place.” – Jesse

    “I think there’s high value in lots of interfacing between the tech leads and whoever the frontline customers are…” – Brian

    “In my opinion-and this is what I talked about in some of the chapters-the business should be directly interacting with the data teams.” – Jesse

    “[The reason] I advocate so strongly for having skilled product management in [a product design] group is because they need to be shielding teams that are doing implementation from the thrashing that may be going on upstairs.” – Brian

    “One of the most difficult things of data teams is actually bringing together parts of the company that never talk to each other.” – Jesse

    Links
    • Big Data Institute
    • Data Teams: A Unified Management Model for Successful Data-Focused Teams
    • Follow Jesse on Twitter
    • Connect with Jesse on LinkedIn

    052 - Reasons Automated Decision Making with Machine Learning Can Fail with James Taylor Nov 17, 2020

    In this episode of Experiencing Data, I sat down with James Taylor, the CEO of Decision Management Solutions. This discussion centers around how enterprises build ML-driven software to make decisions faster, more precise, and more consistent-and why this pursuit may fail.

    We covered:

    • The role that decision management plays in business, especially when making decisions quickly, reliably, consistently, transparently and at scale.
    • The concept of the "last mile," and why many companies fail to get their data products across it
    • James' take on the operationalization of ML models, and why Brian dislikes this term
    • Why James thinks it is important to distinguish between technology problems and organizational change problems when leveraging ML.
    • Why machine learning is not a substitute for hard work.
    • What happens when human-centered design is combined with decision management.
    • James's book, Digital Decisioning: How to Use Decision Management to Get Business Value from AI, which lays out a methodology for automating decision making.
    Quotes from Today's Episode

    "If you're a large company, and you have a high volume transaction where it's not immediately obvious what you should do in response to that transaction, then you have to make a decision - quickly, at scale, reliably, consistently, transparently. We specialize in helping people build solutions to that problem." - James

    "Machine learning is not a substitute for hard work, for thinking about the problem, understanding your business, or doing things. It's a way of adding value. It doesn't substitute for things." - James

    "One thing that I kind of have a distaste for in the data science space when we're talking about models and deploying models is thinking about 'operationalization' as something that's distinct from the technology-building process." - Brian

    "People tend to define an analytical solution, frankly, that will never work because[…] they're solving the wrong problem. Or they build a solution that in theory would work, but they can't get it across the last mile. Our experience is that you can't get it across the last mile if you don't begin by thinking about the last mile." - James

    "When I look at a problem, I'm looking at how I use analytics to make that better. I come in as an analytics person." - James

    "We often joke that you have to work backwards. Instead of saying, 'here's my data, here's the analytics I can build from my data […], you have to say, 'what's a better decision look like? How do I make the decision today? What analytics will help me improve that decision?' How do I find the data I need to build those analytics?' Because those are the ones that will actually change my business." - James

    "We talk about [the last mile] a lot ... which is ensuring that when the human beings come in and touch, use, and interface with the systems and interfaces that you've created, that this isthe make or break point-where technology goes to succeed or die." - Brian

    Links
    • Decision Management Solutions
    • Digital Decisioning: How to Use Decision Management to Get Business Value from AI
    • James' Personal Blog
    • Connect with James on Twitter
    • Connect with James on LinkedIn

    051 - Methods for Designing Ethical, Human-Centered AI with Undock Head of Machine Learning, Chenda Bunkasem Nov 03, 2020

    Chenda Bunkasem is head of machine learning at Undock, where she focuses on using quantitative methods to influence ethical design. In this episode of Experiencing Data, Chenda and I explore her actual methods for designing ethical AI solutions, as well as how she works with UX and product teams on ML solutions.

    We covered:

    • How data teams can actually design ethical ML models, after understanding if ML is the right approach to begin with
    • How Chenda aligns her data science work with the desired UX, so that technical choices are always in support of the product and user instead of “what’s cool”
    • An overview of Chenda’s role at Undock, where she works very closely with product and marketing teams, advising them on uses for machine learning
    • How Chenda’s approaches to using AI may change when there are humans in the loop
    • What NASA’s Technology Readiness Level (TRL) evaluation is, and how Chenda uses it in her machine learning work (see the sketch after this list)
    • What ethical pillars are and how they relate to building AI solutions
    • What the Delphi method is and how it relates to creating and user-testing ethical machine learning solutions
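
    For reference, here is a minimal sketch of NASA’s nine-level TRL scale expressed as code. The level summaries paraphrase NASA’s published definitions; the ready_to_ship helper and the minimum level of 7 are hypothetical illustrations of using TRL as a maturity gate, not Chenda’s actual tooling.

        # NASA's TRL scale as a simple lookup; the gating logic is hypothetical.
        TRL = {
            1: "Basic principles observed and reported",
            2: "Technology concept and/or application formulated",
            3: "Analytical and experimental proof of concept",
            4: "Component validated in a laboratory environment",
            5: "Component validated in a relevant environment",
            6: "Prototype demonstrated in a relevant environment",
            7: "Prototype demonstrated in an operational environment",
            8: "System complete and qualified through test and demonstration",
            9: "Actual system proven in an operational environment",
        }

        def ready_to_ship(level: int, minimum: int = 7) -> bool:
            """Gate a release on technology maturity rather than novelty."""
            print(f"TRL {level}: {TRL[level]}")
            return level >= minimum

        ready_to_ship(5)  # a lab-validated model is not yet a product
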
    Quotes From Today’s Episode

    “There's places where machine learning should be used and places where it doesn't necessarily have to be.” - Chenda

    “The more interpretability, the better off you always are.” - Chenda

    “The most advanced AI doesn't always have to be implemented. People usually skip past this, and they're looking for the best transformer or the most complex neural network. It's not the case. It’s about whether or not the product sticks and the product works alongside the user to aid whatever their endeavor is, or whatever the purpose of that product is. It can be very minimalist in that sense.” - Chenda

    “First we bring domain experts together, and then we analyze the use case at hand, and whatever goes in the middle — the meat between that — is usually decided through many iterations after meetings, and then after going out and doing some sort of user testing or user research, coming back, etc.” - Chenda, explaining the Delphi method.

    “First you're taking answers on someone's ethical pillars, or a company's ethical pillars, based off of their intuition, and then you're finding how that solution can work in a more engineering or systems-design fashion.” - Chenda

    “I'm kind of very curious about this area of prototyping, and figuring out how fast can we learn something about what the problem space is, and what is needed, prior to doing too much implementation work that we or the business don't want to rewind and throw out.” - Brian

    “There are a lot of data projects that get created that end up not getting used at all.”- Brian

    Links
    • Undock website
    • Chenda's personal website
    • Substack
    • Twitter
    • Instagram
    • Connect with Chenda on LinkedIn


    050 - Ways to Practice Creativity and Foster Innovation When You’re An Analytical Thinker Oct 20, 2020

    50 episodes! I can’t believe it. Since this is somewhat of a milestone for the show, I decided to do another solo round of Experiencing Data, following the positive feedback from the last few episodes. Today, I want to help you think about ways to practice creativity when you and your organization are living in an analytical world, creating analytics for a living, and thinking logically and rationally. Why? Because creativity is what leads to innovation, and the science says a lot of decision-making is not rational. This means we have to tap things besides logical reasoning and data to bring our customers data products they will love...and use. (Sorry!)

    One of the biggest blockers to creativity is in the organ above your shoulders and between your ears. I frequently encounter highly talented technical professionals who find creativity to be a foreign thing reserved for people like artists. They don’t think of themselves as being creative, and believe it is an innate talent instead of a skill. If you have ever said, “I don’t have a creative bone in my body,” then this episode is for you.

    As with most technical concepts, practicing creativity is a skill most people can develop, and if you can inculcate a mix of thinking approaches into your data product and analytical solution development, you’re more likely to come up with innovative solutions that will delight your customers. The first thing to realize though is that this isn’t going to be on the test. You can’t score a “92” or a “67” out of 100. There’s no right answer to look up online. When you’re ready to let go of all that, grab your headphones and jump in. I’ll even tell you a story to get going.

    Links Referenced

    Previous podcast with Steve Rader


    049 - CxO & Digital Transformation Focus: (10) Reasons Users Can’t or Won’t Use Your Team’s ML/AI-Driven Software and Analytics Applications Oct 06, 2020

    Join the Free Webinar Related to this Episode

    I'm taking questions and going into depth about how to address the challenges in this episode of Experiencing Data on Oct 9, 2020. 30 Mins + Q/A time. Replay will also be available.

    Register Now

    Welcome back for another solo episode of Experiencing Data. Today, I am primarily focusing on addressing the non-digital natives out there who are trying to use AI/ML in innovative ways, whether through custom software applications and data products, or as a means to add new forms of predictive intelligence to existing digital experiences.

    Many non-digital-native companies today tend to approach software as a technical “thing” that needs to get built, and neglect to consider the humans who will actually use it — resulting in a lack of business or organizational value. While my focus will be on the design and user experience aspects that tend to impede adoption and the realization of business value, I will also talk about some organizational blockers, related to how intelligent software is created, that can derail successful digital transformation efforts.

    These aren’t the only 10 non-technical reasons an intelligent application or decision support solution might fail, but they are 10 that you can and should be addressing—now—if the success of your technology is dependent on the humans in the loop actually adopting your software, and changing their current behavior.

    Links
    • Want to address these issues? Learn about my Self-Guided Video Course and Instructor-Led Seminar
    • Subscribe to my Free DFA Insights Mailing List: https://designingforanalytics.com/mailing-list/

    048 - Good vs. Great: (10) Things that Distinguish the Best Leaders of Intelligent Products, Analytics Applications, and Decision Support Tools Sep 22, 2020

    Today I’m going solo on Experiencing Data! Over the years, I have worked with a lot of leaders of data-driven software initiatives with all sorts of titles. Today, I decided to focus the podcast episode on what I think makes the top product management and digital/software leaders stand out, particularly in the space of enterprise software, analytics applications, and decision support tools.

    This episode is for anyone leading a software application or product initiative that has to produce real value, not just a technology output of some kind. When I recorded this episode, I largely had “product managers” in mind, but titles can vary significantly. Additionally, this episode reflects my perspective as a product/UX design consultant and advisor, looking specifically at the traits associated with these leaders’ ability to produce valuable, innovative solutions customers need and want. A large part of being a successful software leader also involves managing teams and other departments that aren’t directly a part of the product strategy and design/creation process; however, I did not go deep into these aspects today. As a disclaimer, my ideas are not based on research. They’re just my opinions. Some of the topics I covered include:

    • The role of skepticism
    • The misunderstanding of what it means to be a “PM”
    • The way top software leaders collaborate with UX professionals, designers, and engineering/tech leads
    • How top leaders treat UX when building customer-focused technology
    • How top product management leaders define success and make a strategy design-actionable
    • The ways in which great PMs enable empathy in their teams and evangelize meaningful user research
    • The output vs. outcome mindset

    047 - How Yelp Integrates Data Science, Engineering, UX, and Product Management when Creating AI Products with Yelp’s Justin Norman Sep 08, 2020

    In part one of an excellent series on AI product management, LinkedIn Research Scientist Peter Skomoroch and O’Reilly VP of Content Strategy Mike Loukides explained the importance of aligning AI products with your business plans and strategies. In other words, they have to deliver value, and they have to be delivered on time. Unfortunately, this is much easier said than done. I was curious to learn more about what goes into the complex AI product development process, and so for answers I turned to Yelp VP of Data Science Justin Norman, who collaborated with Peter and Mike in the O’Reilly series of articles. Justin is a career data professional and data science leader with experience in multiple companies and industries, having served as director of research and data science at Cloudera Fast Forward Labs, head of applied machine learning at Fitbit, head of Cisco’s enterprise data science office, and as a big data systems engineer with Booz Allen Hamilton. He also served as a Marine Corps Officer with a focus in systems analytics. We covered:

    • Justin’s definition of a successful AI product
    • The two key components behind AI products
    • The lessons Justin learned building his first AI platform and what insights he applied when he went to Yelp.
    • Why AI projects often fail early on, and how teams can better align themselves for success.
    • Who or what Beaker and Bunsen are and how they enable Yelp to test over 700 experiments at any one time.
    • What Justin learned at an airline about approaching problems from a ML standpoint vs. a user experience standpoint—and what the cross-functional team changed as a result.
    • How Yelp incorporates designers, UX research, and product management with its technical teams
    • Why companies should analyze the AI, ML and data science stack and form a strategy that aligns with their needs.
    • The critical role of AI product management and what consideration Justin thinks is the most important when building a ML platform
    • How Justin would approach AI development if he was starting all over at a brand new company
    • Justin’s pros and cons about doing data science in the government vs. the private sector.
    Quotes from Today’s Episode

    “[My non-traditional background] gave me a really broad understanding of the full stack [...] from the physical layer all the way through delivering information to a decision-maker without a lot of time, maybe in an imperfect form, but really packaged for what we're all hoping to have, which is that value-add information to be able to do something with.” - Justin

    “It's very possible to create incredible data science products that are able to provide useful intelligence, but they may not be fast enough; they may not be [...] put together enough to be useful. They may not be easy enough to use by a layperson.” - Justin

    “Just because we can do things in AI space, even if they're automated, doesn't mean that it's actually beneficial or a value-add.” - Justin

    “I think the most important thing to focus on there is to understand what you need to be able to test and deploy rapidly, and then build that framework.” - Justin

    “I think it's important to have a product management team that understands the maturity lifecycle of building out these capabilities and is able to interject and say, ‘Hey, it's time for us to make a different investment, either in parallel, once we've reached this milestone, or this next step in the product lifecycle.’” - Justin

    “...When we talk about product management, there are different audiences. I think [Yelp’s] internal AI product management role is really important because the same concepts of thinking about design, and how people are going to use the service, and making it useful — that can apply to employees just as much as it can to the digital experience that you put out to your end customers.” - Brian

    “You hear about these enterprise projects in particular, where the only thing that ever gets done is the infrastructure. And then by the time they get something ready, the business has moved on, the opportunity's gone, some other challenge has come up, or the team gets replaced because they haven't shown anything, and the next person comes in and wants to do it a different way.” - Brian

    Links
    • Yelp
    • O’Reilly three-part article:
      • Part 1
      • Part 2
      • Part 3
    • Bunsen Article (The Yelp AI Platform)
    • Twitter: @JustinJDN
    • Justin’s LinkedIn

    046 - How Steelcase’s Data Science, UX, & Product Teams Are Helping Customers Design Safer Office Workplaces Informed by Covid-19 Recommendations w/ J... Aug 25, 2020

    When you think of Steelcase, their office furniture probably comes to mind. However, Steelcase is much more than just a manufacturer of office equipment. They enable their customers (workplace/workspace designers) to help those designers’ clients create useful, effective workplaces and offices that are also safe and compliant.

    Jorge Lozano is a data science manager at Steelcase and recently participated as a practitioner and guest on an IIA webinar I gave about product design and management being the missing links in many data science and analytics initiatives. I was curious to dig deeper with Jorge about how Steelcase is enabling its customers to adjust workspaces to account for public health guidelines around COVID-19 and employees returning to their physical offices. The data science team was trying to make it easy for its design customers to understand health guidelines around seat density, employee proximity and other relevant metrics so that any workspace designs could be “checked” against public health guidelines.

    Figuring out the what, when, and how to present these health guidelines in a digital experience was a journey that Jorge was willing to share.

    We covered:

    • Why the company was struggling to understand how their [office] products came together, and how the data science group tried to help answer this.
    • The digital experience Steelcase is working on to re-shape offices for safe post-pandemic use.
    • How Steelcase is evaluating whether their health and safety recommendations were in fact safe, and making a difference.
    • How Jorge’s team transitioned from delivering “static data science” outputs into providing an enabling capability to the business.
    • What Steelcase did to help dealer designers when engaging with customers, in order to help them explain the health risks associated with their current office layouts and plans.
    • What it was like for Jorge’s team to work with a product manager and UX designer, and how it improved the process of making the workspace health guidelines useful.
    Resources and Links:
    • Steelcase: https://www.steelcase.com/
    • LinkedIn: https://www.linkedin.com/in/jorge-lozano-flores/
    Quotes from Today’s Episode

    “We really pride ourselves in research-based design” - Jorge

    “This [source data from design software] really enabled us to make very specific metrics to understand the current state of the North American office.” - Jorge

    “Using the data that we collected, we came up with samples of workstations that are representative of what our customers are more likely to have. We retrofitted them, and then we put the retrofitted desk in the lab that basically simulates the sneeze of a person, or somebody coughing, or somebody kind of spitting a little bit while they're talking, and all of that. And we're collecting some really amazing insights that can quantify the extent to which certain retrofits work in disease transmission.” - Jorge

    “I think one of the challenges is that, especially when you're dealing with a software design solution that involves probabilities, someone has to be the line-drawer.” - Brian

    “The challenge right now is how to set up a system where we can swarm at things faster, where we're more efficient at understanding the needs and [are able to get] it in the hands of the right people to make those important decisions fast? It's all pointing towards data science as an enabling capability. It's a team sport.” - Jorge


    045 - Healthcare Analytics…or Actionable Decision Support Tools? Leadership Strategies from Novant Health’s SVP of Data Products, Karl Hightower Aug 11, 2020

    Healthcare professionals need access to decision support tools that deliver the right information, at the right time. In a busy healthcare facility, where countless decisions are made on a daily basis, it is crucial that any analytical tools provided actually yield useful decision support to the target customer. In this episode, I talked to Karl Hightower from Novant Health about how he and his team define “quality” when it comes to data products, and what they do to meet that definition in their daily work. Karl Hightower is the Chief Data Officer and SVP of Data Products at Novant Health, a busy hospital and medical group in the Southeast United States with over 16 hospitals and more than 600 clinics. Karl and I took a deep dive into data product management, and how Karl and his team are designing products and services that help empower all of the organization’s decision makers. In our chat, we covered:

    • How a non-tech company like Novant Health approaches data product management
    • The challenges of designing data products with empathy in mind while working in an environment of physicians and healthcare professionals
    • The metric Karl’s team uses to judge the quality and efficacy of their data products, and how executive management contributed to defining these success criteria
    • How Karl encourages deep empathy between analytics teams and their users by investigating closely how those users make decisions with data
    • How and why Novant embraces design and UX in their data product work
    • The types of outcomes Karl sees when designers and user experience professionals work with analytics and data science practitioners.
    • How Karl was able to obtain end-user buy-in and support
    • The strategy Karl used to deal with a multitude of “information silos” resulting from the company’s numerous analytics groups.
    Resources and Links:
    • Novant Health website: https://www.novanthealth.org/
    • Novant Health LinkedIn: https://www.linkedin.com/company/novanthealth/
    • Karl Hightower LinkedIn: https://www.linkedin.com/in/karl-hightower-4528123/
    Quotes from Today’s Episode

    “I tend to think of product management as a core role along with a technical lead and product designer in the software industry. Outside the software industry, I feel like product management is often this missing hub.” - Brian

    “I really want to understand why the person is asking for what they're asking for, so there is much more of a closer relationship between that portfolio team and their end-user community that they're working with. It's almost a day-to-day living and breathing with and understanding not just what they're asking for and why are they asking for it, but you need to understand how they use information to make decisions.” - Karl

    “I think empathy can sound kind of hand-wavy at times. Soft and fluffy, like whipped cream. However, more and more at senior levels, I am hearing how much leaders feel these skills are important because the technology can be technically right and effectively wrong.” - Brian

    “The decision that we got to on executive governance was how are we going to judge success criteria? How do we know that we're delivering the right products and that we're getting better on the maturity scale? And the metric is actually really simple. Ask the people that we're delivering for: does this give you what you need, when you need it, to make those decisions?” - Karl

    “The number one principle is, if I don't know how something is done [created with data], I'm very unlikely to trust it. And as you look at just the nature of healthcare, transparency absolutely has to be there because we want the clinicians to poke holes in it, and we want everyone to be able to trust it. So, we are very open. We are very transparent with everything that goes in it.” - Karl

    “You need to really understand the why. You’ve got to understand what business decisions are being made, what's driving the strategy of the people who are asking for all that information.” - Karl


    044 - The Roles of Product and Design when “Competing in the Age of AI” with HBS Professor and Author Karim Lakhani Jul 28, 2020

    If there’s one thing that strikes fear into the heart of every business executive, it’s having your company become the next Blockbuster or Neiman Marcus — that is, ignoring change, and getting wiped out by digital competitors. In this episode, I dived into the changing business landscape with Karim Lakhani who is a Professor at Harvard Business School and co-author of the new book Competing in the Age of AI: When Algorithms and Networks Run the World, which he wrote with his friend and colleague at HBS, Marco Iansiti.

    We discuss how AI, machine learning, and digital operating models are changing business architecture and disrupting traditional business models. I also pressed Karim to go a bit deeper on how, and whether, he thinks product mindset and design factor into the success of AI in today’s businesses. We also go off on a fun tangent about the music industry, which just might have to be a future episode! In any case, I highly recommend the book. It’s particularly practical for those of you working in organizations that are not digital natives and want to hear how the featured companies in the book are setting themselves apart by leveraging data and AI in customer-facing products and in internal applications/operations. Our conversation covers:

    • Karim’s new book, Competing in the Age of AI: When Algorithms and Networks Run the World, co-authored with Marco Iansiti.
    • How digital operating models are colliding with traditional product-oriented businesses, and the impact this is having on today’s organizations.
    • The critical role of data product management that is frequently missing when companies try to leverage AI
    • Karim’s thoughts on ethics in AI and machine learning systems, and how they need to be baked into business and engineering.
    • The similarity Karim sees between COVID-19 and AI
    • The role of design, particularly in human-in-the-loop systems and how companies need to consider the human experience in applications of AI that augment decision making vs. automate it.
    • How Karim sees the ability to adapt in business as being critical to survival in the age of AI
    Resources and Links
    • Book Link: https://www.amazon.com/Competing-Age-AI-Leadership-Algorithms/dp/1633697622/
    • Twitter: https://twitter.com/klakhani
    • LinkedIn: https://www.linkedin.com/in/professorkl/
    • Harvard Business Analytics Program: https://analytics.hbs.edu/
    Quotes from Today’s Episode

    “Our thesis in the book is that a new type of an organization is emerging, which has eliminated bottlenecks in old processes.” - Karim

    “Digital operating models have exponential scaling properties, in terms of the value they generate, versus traditional companies that have value curves that basically flatten out, and have fixed capacity. Over time, these digital operating models collide with these traditional product models, win over customers, and gather huge amounts of market share….” - Karim

    “This whole question about human-in-the-loop is important, and it's not going to go away, but we need to start thinking about, well, how good are the humans, anyway? - Karim

    “Somebody once said, “Ethics defines the boundaries of what you care about.” And I think that's a really important question…” - Brian

    “Non-digital natives worry about these tech companies coming around and eating them up, and I can’t help but wonder ‘why aren't you also copying the way they design and build software?’” - Brian

    “...These established companies have a tough time with the change process.” - Karim


    043 - What a Product Management Mindset Can do for Data Science and Analytics Leaders with Product School CEO, Carlos González de Villaumbrosia Jul 14, 2020

    I am a firm believer that one of the reasons data science and analytics have such a high failure rate is a lack of product management and design. To me, product is a mindset just as much as a job title, and more and more voices in the data community are agreeing with me on this (Gartner CDO v4, the International Institute for Analytics, several O’Reilly authors, Karim Lakhani’s new book on AI, and others). This is even more true as more companies begin to leverage AI. So many of these companies fear what startups and software companies are doing, yet they do not copy the way tech companies build software applications and enable specific user experiences that unlock the desired business value.

    Integral to building software is the product management function—and when these applications and tools have humans in the loop, the product/UX design function is equally as important to ensure adoption, usability, engagement, and alignment with the business objectives.

    In modern tech companies, the overlap between product design and product management can be significant; product leaders frequently come up through both the design and engineering ranks, and indeed my own work heavily overlaps with product. What this tells me is that product is a mindset, and it’s a role many can learn if they believe it’s critical.

    So why aren’t more data science and analytics leaders forming strong product design and analytics functions? I don’t know, so I decided to bring Carlos onto the show to talk about his company, Product School, which offers product management training delivered by instructors from many of the big tech companies. In this episode, Carlos provides a comprehensive overview of why he launched Product School, what makes an effective product manager, and the importance of having a structured vision and alignment when developing products.

    This conversation explores:

    • Why Carlos launched the Product School for professionals who want to learn on the side without quitting their job and putting their life on hold.
    • The type of mentality product managers need to have and whether specialization matters within product management.
    • Whether being a product manager in machine learning and AI is different than working with a traditional software product.
    • How product management is not project management
    • Advice for approaching executive decision makers about product management education
    • How to avoid the trap of focusing too heavily on process
    • How product management often leads to executive leadership roles
    • The “power trio” of engineering, product management, and design, and the value of aligning all three groups.
    • Understanding the difference between applied and academic experience
    • How the relationship between design and PM has changed over the last five years
    • What the gap looks like between a skilled PM and an exceptional one.
    Resources and Links

    The State of Product Analytics (Also referred to as The Future of Product Analytics in the audio)

    Mixpanel, company that they partnered with to create the above report

    Episode 17 of Experiencing Data

    Twitter

    Main Company Site

    ProductCon

    Productverse

    Quotes from Today’s Episode

    “You can become a product manager by building products. You don't need to be a software engineer. You don’t need to have an MBA. You don't need to be an incredible, inspiring visionary. This is stuff that you can learn, and the best way to learn it is by doing it.” - Carlos

    “A product manager is a generalist. And in order to become a generalist, usually you have to have some sort of [specialty] before. So, we define product management as the intersection in between business, engineering, and design. And you can become a good product manager from either of those options.” - Carlos

    “If you have [a power trio of technology, product, and design] and the energy is right, and the relationships are really strong, boy, you can get a lot of stuff done, and you can iterate quickly, and really produce some great stuff.” - Brian

    “I think part of the product management mindset... is to realize part of your job now is to be a problem finder, it’s to help set the strategy, it's to help ensure that a model is not the solution.” - Brian

    “I think about a bicycle wheel with the hub in the center and the spokes coming out. Product management is that hub, and it reports up into the business, but you have all these different spokes, QA, and software engineering, maybe data science and analytics, product design, and user experience design. These are all kind of spokes.” - Brian

    “These are people who are constantly learning, but not just about their products. They’re constantly learning in general. Reading books, practicing sports, doing whatever it is, but always looking at what's new and wanting to play around with it, just to be dangerous enough. So, I think those three areas: obsession with a customer based on data; obsession with empathy; and then obsession with learning, or just being curious are really critical.” - Carlos


    042 - Why Machine Learning and Analytics Alone Can’t Drive Behavioral Change inside Police Departments with Allison Weil Jun 30, 2020

    “What happened in Minneapolis and Louisville and Chicago and countless other cities across the United States is unconscionable (and to be clear, racist). But what makes me the maddest is how easy this problem is to solve, just by the police deciding it’s a thing they want to solve.” - Allison Weil on Medium

    Before Allison Weil became an investor and Senior Associate at Hyde Park Ventures, she was a co-founder at Flag Analytics, an early intervention system for police departments designed to help identify officers at risk of committing harm. Unfortunately, Flag Analytics—as a business—was set up for failure from the start, regardless of its predictive capability. As Allison explains so candidly and openly in her recent Medium article (thanks Allison!), the company had “poor product-market fit, a poor problem-market fit, and a poor founder-market fit.” The technology was not the problem, and as a result, it did not help them succeed as a business or in producing the desired behavior change, because the customers were not ready to act on the insights.

    Yet the key takeaways from her team’s research during the design and validation of their product — and the uncomfortable truths they uncovered — are extremely valuable, especially now as we attempt to understand why racial injustice and police brutality continue to persist in law enforcement agencies. As it turns out, simply having the data to support a decision doesn’t mean the decision will be made using the data. This is what Allison found in her interactions with several police chiefs and departments, and it’s also what we discussed in this episode. I asked Allison to go deeper into her Medium article, and she agreed. Together, we covered:

    • How Allison and a group of researchers tried to streamline the identification of urban police officers at risk of misconduct or harm using machine learning.
    • Allison’s experience of trying to build a company and program to solve a critical societal issue, and dealing with police departments that weren’t ready to take action on the analytical insights her product revealed
    • How she went about creating a “single pane of glass,” where officers could monitor known problem officers and also discover officers who may be in danger of committing harm.
    • The barriers that prevented the project from being a success, from financial ones to a general unwillingness among certain departments to take remedial action against officers despite historical or predicted data
    • The key factors and predictors Allison’s team found in the data set of thousands of officers that correlated highly with poor officer behavior in the future—and how it seemed to fall on deaf ears
    • How Allison and her team approached the sensitive issue of race in the data, and a [perhaps unexpected] finding they discovered about how prevalent racism seemed to be in departments in general.
    • Allison’s experience of conducting “ride-alongs” (qualitative 1x1 research) where she went on patrol with officers to observe their work and how the experience influenced how her team designed the product and influenced her perspective while analyzing the police officer data set.
    Resources and Links:
    • Twitter
    • LinkedIn
    • Medium
    Quotes from Today’s Episode

    “The folks at the police departments that we were working with said they were well-intentioned, and said that they wanted to talk through, and fix the problem, but when it came to their actions, it didn't seem like [they were] really willing to make the choices that they needed to make based off of what the data said, and based off of what they knew already.” - Allison

    “I don't come from a policing background, and neither did any of my co-founders. And that made it really difficult to relate to different officers, and relate to departments. And so the combination of all of those things really didn't set me up for a whole lot of business success in that way.” - Allison

    “You can take a whole lot of data and do a bunch of analysis, but what I saw was the data didn't show anything that the police department didn't know already. It amplified some of what they knew, but [the problem here] wasn't about the data.” - Allison

    “It was really frustrating for me, as a founder, sure, because I was putting all this energy into trying to build a software and trying to build a company, but also just frustrating for me as a person and a citizen… you fundamentally want to solve a problem, or help a community solve a problem, and realize that the people at the center of it just aren't ready for it to be solved.” - Allison

    “...We did have race data, but race was not the primary predictor or reason for [brutality]. It may have been a factor, but it was not that there were racist cops wandering around, using force only against people of particular races. What we found was….” - Allison

    “The way complaints are filed department to department is really, really different. And so that results in complaints looking really, really different from department to department and counts looking different. But how many are actually reviewed and sustained? And that looks really, really different department to department.” - Allison

    “...Part of [diversity] is asking the questions you don't know to ask. And that's part of what you get out of having a diverse team— they're going to surface questions that no one else is asking about. And then you can have the discussion about what to do about them.” - Brian


    041 - Data Thinking: An Approach to Using Design Thinking to Maximize the Effectiveness of Data Science and Analytics with Martin Szugat of Datentreib... Jun 16, 2020

    The job of many internally-facing data scientists in business settings is to discover, explore, interpret, and share data, turning it into actionable insight that can benefit the company and improve outcomes. Yet data science teams often struggle with the very basic question of how the company’s data assets can best serve the organization. Problem statements are often vague, leading to data outputs that don’t turn into value or actionable decision support in the last mile.

    This is where Martin Szugat and his team at Datentreiber step in, helping clients develop and implement successful data strategies through hands-on workshops and training. Martin is based in Germany and specializes in helping teams learn to identify the specific challenges data can solve, and to think through the problem-solving process with a human focus. This in turn helps teams select the right technology and be objective about whether they need advanced tools such as ML/AI or something simpler to produce value.

    In our chat, we covered:

    • How Datentreiber helps clients understand and derive value from their data — identifying assets, and determining relevant use cases.
    • An example of how one client changed not only its core business model, but also its culture by working with Datentreiber, transitioning from a data-driven perspective to a user-driven perspective.
    • Martin’s strategy of starting with small analytics projects, and slowly gaining buy-in from end users, with a special example around social media analytics that led to greater acceptance and understanding among team members.
    • The canvas tools Martin likes to use to visualize abstract concepts related to data strategy, data products, and data analysis.
    • Why it helps to mix team members from different departments like marketing, sales, and IT, and how Martin goes about doing that
    • How cultural differences can impact design thinking, collaboration, and visualization processes.
    Resources and Links:
    • Company site (German) (English machine translation)
    • Datentreiber Open-Source Design Tools
    • Data Strategy Design (German) (English machine translation)
    • Martin’s LinkedIn
    Quotes from Today’s Episode

    “Often, [clients] already have this feeling that they're on the wrong path, but they can't articulate it. They can't name the reason why they think they are on the wrong path. They learn that they built this shiny dashboard or whatever, but the people—their users, their colleagues—don't use this dashboard, and then they learn something is wrong.” - Martin

    “I usually like to call this technically right and effectively wrong solutions. So, you did all the pipelining and engineering and all that stuff is just fine, but it didn't produce a meaningful outcome for the person that it was supposed to satisfy with some kind of decision support.” - Brian

    “A simple solution is becoming a trainee in other departments. So, ask, for example, the marketing department to spend a day, or a week and help them do their work. And just look over the shoulder, what they are doing, and really try to understand what they are doing, and why they are doing it, and how they are doing it. And then, come up with solution proposals.” - Martin

    “...I tend to think of design as a team sport, and it's a lot about facilitating groups of these different cross-departmental groups of arriving at a solution for a particular audience; a specific audience that needs a specific problem solved.” - Brian

    “[One client said] we are very good at implementing the right solutions for the wrong problems. And I think this is what often happens in data science, or business intelligence, or whatever, also in IT departments: that they are too quick in starting thinking about the solution before they understand the problem.” - Martin

    “If people don't understand what you're doing or what your analytic solution is doing, they won't use it and there will be no acceptance.” - Martin

    “One thing we practice a lot, [...] is in visualizing those abstract things like data strategy, data product, and analytics. So, we work a lot with canvas tools because we learned that if you show people—and it doesn't matter if it's just on a sticky note on a canvas—then people start realizing it, they start thinking about it, and they start asking the right questions and discussing the right things. ” - Martin


    040 – Improving Potato Chips and Space Travel: NASA’s Steve Rader on Open Innovation Jun 02, 2020

    Innovation doesn’t just happen out of thin air. It requires a conscious effort and team-wide collaboration. At the same time, innovation will be critical for NASA if the organization hopes to remain competitive and successful in the coming years. Enter Steve Rader. Steve has spent the last 31 years at NASA, working in a variety of roles including flight control under the legendary Gene Kranz, software development, and communications architecture. A few years ago, Steve was named Deputy Director for the Center of Excellence for Collaborative Innovation. As Deputy Director, Steve is spearheading the use of open innovation, as well as diversity thinking. In doing so, Steve is helping the organization find more effective ways of approaching and solving problems. In this fascinating conversation, Steve and Brian discuss design, divergent thinking, and open innovation, plus:

    • Why Steve decided to shift away from hands-on engineering and management to the emerging field of open innovation, and why NASA needs this as well as diversity in order to remain competitive.
    • The challenge of convincing leadership that diversity of thought matters, and why the idea of innovation often receives pushback.
    • How NASA is starting to make room for diversity of thought, and leveraging open innovation to solve challenges and bring new ideas forward.
    • Examples of how experts from unrelated fields help discover breakthroughs to complex and greasy problems, such as potato chips!
    • How the rate of technological change is different today, why innovation is more important than ever, and how crowdsourcing can help streamline problem solving.
    • Steve’s thoughts on the type of leader that’s needed to drive diversity at scale, and why that person should be a generalist
    • Prioritizing outcomes over outputs, defining problems, and determining what success looks like early on in a project.
    • The metrics a team can use to measure whether one is “doing innovation.”
    Resources and Links

    Designingforanalytics.com/theseminar

    Steve Rader’s LinkedIn: https://www.linkedin.com/in/steve-rader-92b7754/

    NASA Solve: nasa.gov/solve

    Steve Rader’s Twitter: https://twitter.com/SteveRader

    NASA Solve Twitter: https://twitter.com/NASAsolve

    Quotes from Today’s Episode

    “The big benefit you get from open innovation is that it brings diversity into the equation […] and forms this collaborative effort that is actually really, really effective.” – Steve

    “When you start talking about innovation, the first thing that almost everyone does is what I call the innovation eye-roll. Because management always likes to bring up that we’re innovative or we need innovation. And it just sounds so hand-wavy, like you say. And in a lot of organizations, it gets lots of lip service, but almost no funding, almost no support. In most organizations, including NASA, you’re trying to get something out the door that pays the bills. Ours isn’t to pay the bills, but it’s to make Congress happy. And, when you’re doing that, that is a really hard, rough space for innovation.” – Steve

    “We’ve run challenges where we’re trying to improve a solar flare algorithm, and we’ve got, like, a two-hour prediction that we’re trying to get to four hours, and the winner of that in the challenge ends up to be a cell phone engineer who had an undergraduate degree from, like, 30 years prior that he never used in heliophysics, but he was able to take that extracting signal from noise math that they use in cell phones, and apply it to heliophysics to get an eight-hour prediction capability.” – Steve

    “If you look at how long companies stay around, the average in 1958 was 60 years, it is now less than 18. The rate of technology change and the old model isn’t working anymore. You can’t actually get all the skills you need, all the diversity. That’s why innovation is so important now, is because it’s happening at such a rate, that companies—that didn’t used to have to innovate at this pace—are now having to innovate in ways they never thought.” – Steve

    “…Innovation is being driven by this big technology machine that’s happening out there, where people are putting automation to work. And there’s amazing new jobs being created by that, but it does take someone who can see what’s coming, and can see the value of augmenting their experts with diversity, with open innovation, with open techniques, with innovation techniques, period.” – Steve

    “…You have to be able to fail and not be afraid to fail in order to find the real stuff. But I tell people, if you’re not willing to listen to ideas that won’t work, and you reject them out of hand and shut people down, you’re probably missing out on the path to innovation because oftentimes, the most innovative ideas only come after everyone’s thrown in 5 to 10 ideas that actually won’t work.” – Steve


    039 – How PEX Fingerprinted 20 Billion Audio and Video Files and Turned It Into a Product to Help Musicians, Artists and Creators Monetize their Work May 19, 2020

    Every now and then, I like to insert a music-and-data episode into the show since hey, I’m a musician, and I’m the host 😉 Today is one of those days!

    Rasty Turek is founder and CEO of Pex, a leading analytics and rights management platform used for discovering and tracking video and audio content using data science.

    Pex’s AI crawls the internet for user-generated content (UGC), identifies copyrighted audio/visual content, indexes the media, and then enables rights holders to understand where their art is being used so it can be monetized. Pex’s goal is to help its customers understand who is using their licensed content and what they are using it for — along with key insights to support monetization initiatives and negotiations with UGC platform providers.

    In this episode of Experiencing Data, we discuss:

    • How the data science behind Pex works in terms of fingerprinting actual songs (the underlying IP of a composition) vs. masters (the actual audio recordings of songs) — see the generic, illustrative fingerprinting sketch after this episode summary
    • The challenges Pex has in identifying complex, audio-rich user-generated content and cover recordings, and ensuring it is indexing as many usages as possible.
    • The transitioning UGC market, and how Pex is trying to facilitate change. One item that Rasty discusses is Europe’s new Copyright Directive law, and how it’s impacting UGC from a licensing standpoint.
    • How analytics are empowering publishers, giving them key insights and firepower to negotiate with UGC platforms over licensed content.
    • Key product design and UX considerations that Pex has taken to make their analytics useful to customers
    • What Rasty learned through his software iteration journey at Pex, including a memorable example about bias that influenced future iterations of the design/UI/UX
    • How Pex predicts and prioritizes monetization opportunities for customers, and how it surfaces infringements.
    • Why copyright education is the “last bastion of the internet” — and the role that Pex is playing in streamlining copyrighted material.

    Brian also challenges Rasty directly, asking him how the Pex platform balances flexibility with complexity when dealing with extremely large data sets.
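    Pex's actual fingerprinting technology is proprietary and far more robust than anything shown here, but the core idea Rasty describes, reducing audio to compact, searchable signatures, can be illustrated generically. The toy sketch below (all names and parameters are hypothetical, not Pex's API) fingerprints audio by its dominant spectral peaks and compares two fingerprints by overlap.

    ```python
    import numpy as np

    def fingerprint(samples, win=4096, hop=2048):
        """Toy audio fingerprint: the set of dominant frequency bins
        across short windows. Production systems hash peak
        constellations with timing information and are robust to
        noise, edits, and excerpts; this only shows the basic idea."""
        peaks = set()
        for start in range(0, len(samples) - win, hop):
            window = samples[start:start + win] * np.hanning(win)
            peaks.add(int(np.argmax(np.abs(np.fft.rfft(window)))))
        return peaks

    def similarity(a, b):
        """Jaccard overlap between two fingerprints (0.0 to 1.0)."""
        return len(a & b) / max(1, len(a | b))

    # Two noisy takes of the same tone match; a different tone does not.
    t = np.linspace(0, 1, 44100, endpoint=False)
    a = np.sin(2 * np.pi * 440 * t)
    b = a + 0.01 * np.random.randn(t.size)
    c = np.sin(2 * np.pi * 523 * t)
    print(similarity(fingerprint(a), fingerprint(b)))  # near 1.0
    print(similarity(fingerprint(a), fingerprint(c)))  # near 0.0
    ```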

    Resources and Links

    Designingforanalytics.com/theseminar

    Pex.com

    Twitter: https://twitter.com/synopsi

    Quotes from Today’s Episode

    “I will say, 80 to 90 percent of the population eventually will be rights owners of some sort, since this is how copyright works. Everybody that produces something is immediately a rights owner, but I think most of us will eventually generate our livelihood through some form of IP, especially if you believe that the machines are going to take the manual labor from us.” - Rasty

    “When people ask me how it is to run a big data company, I always tell them I wish we were not [a big data company], because I would much rather have “small data,” and have a very good business, rather than big data.” - Rasty

    “There's a lot of these companies that [have operated] in this field for 20 to 30 years; we just took it a little bit further. We adjusted it towards the UGC world, and we focused on simplicity.” - Rasty

    “We don't follow users, we follow content. And so, at some point [during our design process] we were exploring if we could follow users [of our customers’ copyrighted content].... As we explored this more, we started noticing that [our customers] started making incorrect decisions because they were biased towards users [of their copyrighted content].” - Rasty

    “If you think that your general customer is a coastal elite, but the reality is that they are Midwest farmers, you don't want to see that as the reality and you start being biased towards that. So, we immediately started removing that data and really focused on the content itself—because that content is not biased.” - Rasty

    “[Re: Pex’s design process] We always started with the guiding principles. What is the task that you're trying to solve? So, for instance, if your task is to monetize your content, then obviously you want to monetize the most obvious content that will get the most views, right?” - Rasty


    038 – (Special Co-Hosted Episode) Brian and Mark Bailey Discuss 10 New Design and UX Considerations for Creating ML and AI-Driven Products and Applica... May 05, 2020

    Mark Bailey is a leading UX researcher and designer, and host of the Design for AI podcast — a program which, similar to Experiencing Data, explores the strategies and considerations around designing data-driven, human-centered applications built with machine learning and AI.

    In this episode of Experiencing Data — co-released with the podcast Design for AI — Brian and Mark share the host and guest role, and discuss 10 different UX concepts teams may need to consider when approaching ML-driven data products and AI applications. A great discussion on design and #MLUX ensued, covering:

    • Recognizing the barrier of trust and adoption that exists with ML, particularly at non-digital native companies, and how to address it when designing solutions.
    • Why designers need to dig beyond surface level knowledge of ML, and develop a comprehensive understanding of the space
    • How companies attempt to “separate reality from the movies,” with AI and ML, deploying creative strategies to build trust with end users (with specific examples from Apple and Tesla)
    • Designing for “undesirable results” (how to gracefully handle the UX when a model produces unexpected predictions)
    • The ongoing dance of balancing UX with organizational goals and engineering milestones
    • What designers and solution creators need to be planning for and anticipating with AI products and applications
    • Accessibility considerations with AI products and applications – and how it can be improved
    • Mark’s approach to ethics and community as part of the design process.
    • The importance of systems design thinking when collecting data and designing models
    • The different model types and deployment considerations that affect a solution’s UX — and what solution designers need to know to stay ahead
    • Collaborating, and visualizing — or storyboarding — with developers, to help understand data transformation and improve model design
    • The role that designers can play in developing model transparency (i.e. interpretability and explainable AI)
    • Thinking about pain points or problems that can be outfitted with decision support or intelligence to make an experience better
    Resources and Links:

    Designing for AI Podcast

    Designing for AI

    Experiencing Data – Episode 35

    Designing for Analytics Seminar

    Seeing Theory

    Measuring U

    Contact Brian

    @DesignforAI

    Quotes from Today’s Episode

    “There’s not always going to be a software application that is the output of a machine learning model or something like that. So, to me, designers need to be thinking about decision support as being the desired outcome, whatever that may be.” – Brian

    “… There are [about] 30 to 40 different types of machine learning models that are the most popular ones right now. Knowing what each one of them is good for, as the designer, really helps to conform the machine learning to the problem instead of vice versa.” – Mark

    “You can be technically right and effectively wrong. All the math part [may be] right, but it can be ineffective if the human adoption piece wasn’t really factored into the solution from the start.” – Brian

    “I think it’s very interesting to see what some of the big companies have done, such as Apple. They won’t use the term AI, or machine learning in any of their products. You’ll see their chips, they call them neural engines instead [of having] anything to do with AI. I mean, so building the trust, part of it is trying to separate out reality from movies.” – Mark

    “Trust and adoption is really important because of the probabilistic nature of these solutions. They’re not always going to spit out the same thing all the time. We don’t manually design every single experience anymore. We don’t always know what’s going to happen, and so it’s a system that we need to design for.” – Brian

    “[Thinking about] a small piece of intelligence that adds some type of value for the customer, that can also be part of the role of the designer.” – Brian

    “For a lot of us that have worked in the software industry, our power trio has been product management, software engineering lead, and some type of design lead. And then, I always talk about these rings, like, that’s the close circle. And then, the next ring out, you might have some domain experts, and some front end developer, or prototyper, a researcher, but at its core, there were these three functions there. So, with AI, is it necessary, now, that we add a fourth function to that, especially if our product was very centered around this? That’s the role of the data scientist. And so, it’s no longer a trio anymore.” – Brian


    037 – A VC Perspective on AI and Building New Businesses Using Machine Intelligence featuring Rob May of PJC Apr 21, 2020

    Rob May is a general partner at PJC, a leading venture capital firm. He was previously CEO of Talla, a platform for AI and automation, as well as co-founder and CEO of Backupify. Rob is an angel investor who has invested in numerous companies, and author of InsideAI which is said to be one of the most widely-read AI newsletters on the planet.

    In this episode, Rob and I discuss AI from a VC perspective. We look into the current state of AI, service as a software, and what Rob looks for in his startup investments and portfolio companies. We also investigate why so many companies are struggling to push their AI projects forward to completion, and how this can be improved. Finally, we outline some important things that founders can do to make products based on machine intelligence (machine learning) attractive to investors.

    In our chat, we covered:

    • The emergence of “service as a software,” which can be understood as a logical extension of “software eating the world,” and the two hard things to get right (yes, you read that correctly, and Rob will explain what this new SAAS acronym means!)
    • How automation can enable workers to complete tasks more efficiently and focus on bigger problems machines aren’t as good at solving
    • Why AI will become ubiquitous in business—but not for 10-15 years
    • Rob’s Predict, Automate, and Classify (PAC) framework for deploying AI for business value, and how it can help achieve maximum economic impact
    • Economic and societal considerations that people should be thinking about when developing AI – and what we aren’t ready for yet as a society
    • Dealing with biases and stereotypes in data, and the ethical issues they can create when training models
    • How using synthetic data in certain situations can improve AI models and facilitate usage of the technology
    • Concepts product managers of AI and ML solutions should be thinking about
    • Training, UX and classification issues when designing experiences around AI
    • The importance of model-market fit. In other words, whether a model satisfies a market demand, and whether it will actually make a difference after being deployed.
    Resources and Links:

    Email Rob@pjc.vc

    PJC

    Talla

    SmartBid

    The PAC Framework for Deploying AI

    Twitter: @robmay

    Sign up for Rob’s Newsletter

    Quotes from Today’s Episode

    “[Service as a software] is a logical extension of software eating the world. Software eats industry after industry, and now it’s eating industries using machine learning that are primarily human labor focused.” — Rob

    “It doesn’t have to be all digital. You could also think about it in terms of restaurant automation, and some of those things where if you keep the interface the same to the customer—the service you’re providing—you strip it out, and everything behind that, if it’s digital it’s an algorithm and if it’s physical, then you use a robot.” — Rob, on service as a software.

    “[When designing for] AI you really want to find some way to convey to the user that the tool is getting smarter and learning.”— Rob

    “There’s a gap right now between the business use cases of AI and the places it’s getting adopted in organizations.” — Rob

    “The reason that AI’s so interesting is because what you effectively have now is software models that don’t just execute a task, but they can learn from that execution process and change how they execute.” — Rob

    “If you are changing things and your business is changing, which is most businesses these days, then it’s going to help to have models around that can learn and grow and adapt. I think as we get better with different data types—not just text and images, but more and more types of data types—I think every business is going to deploy AI at some stage.” — Rob

    “The general sense I get is that overall, putting these models and AI solutions [into production] is pretty difficult still.” — Brian

    “They’re not looking at what’s the actual best use of AI for their business, [and thinking] ‘Where could you really apply to have the most economic impact?’ There aren’t a lot of people that have thought about it that way.” — Rob, on how AI is being misapplied in the enterprise.

    “You have to focus on the outcome, not just the output.” — Brian

    “We need more heuristics for how, as a product manager, you think of AI and building it into products.” — Rob

    “When the internet came about, it impacted almost every business in some way, shape, or form. […] The reason that AI’s so interesting is because what you effectively have now is software models that don’t just execute a task, but they can learn from that execution process and change how they execute.” — Rob

    “Some biases and stereotypes are true, and so what happens if the AI uncovers one that we’re really uncomfortable with?” — Rob


    036 – How Higher-Ed Institutions are Using AI and Analytics to Better Serve Students with Professor of Learning Informatics and Edtech Expert Simon Bu... Apr 07, 2020

    Simon Buckingham Shum is Professor of Learning Informatics at Australia’s University of Technology Sydney (UTS) and Director of the Connected Intelligence Centre (CIC)—an innovation center where students and staff can explore education data science applications. Simon holds a Ph.D. from the University of York, and is known for bringing a human-centered approach to analytics and development. He also co-founded the Society for Learning Analytics Research (SoLAR), which is committed to advancing learning through ethical, educationally sound data science.

    In this episode, Simon and I discuss the state of education technology (edtech), privacy, human-centered design in the context of using AI in higher ed, and the numerous technological advancements that are re-shaping the higher level education landscape.

    Our conversation covered:

    • How the hype cycle around big data and analytics is starting to pervade education
    • The differences between using BI and analytics to streamline operations and improve retention rates, vs. using AI and data to increase learning and engagement
    • Creating systems that teachers see as interesting and valuable, in order to drive user adoption and avoid friction.
    • The more difficult-to-design-for, but more important skills and competencies researchers are working on to prepare students for a highly complex future workplace
    • The data and privacy issues that must be factored into ethical solution designs
    • Why “learning is not shopping,” meaning we, the creators of the tech, have to infer what goes on in the minds of the humans we are studying, mostly by studying behavior.
    • Why learning scientists and educational professionals play an important role in the edtech design process, in addition to technical workers
    • How predictive modeling can be used to identify students who are struggling—and the ethical questions that such solutions raise.
    Resources and Links

    Designing for Analytics

    simon.buckinghamshum.net

    Simon on LinkedIn

    #experiencingdata

    Designing for Analytics Podcast

    Quotes from Today’s Episode

    “We are seeing AI products coming out. Some of them are great, and are making a huge difference for learning STEM type subjects— science, tech, engineering, and medicine. But some of them are not getting the balance right.” — Simon

    “The trust break-down will come, and has already come in certain situations, when students feel they’re being tracked…” — Simon, on students perceiving BI solutions as surveillance tools instead of beneficial

    “Increasingly, it’s great to see so many people asking critical questions about the biases that you can get in training data, and in algorithms as well. We want to ask questions about whether people are trusting this technology. It’s all very well to talk about big data and AI, etc., but ultimately, no one’s going to use this stuff if they don’t trust it.” — Simon

    “I’m always asking what’s the user experience going to be? How are we actually going to put something in front of people that they’re going to understand…” — Simon

    “There are lots of success stories, and there are lots of failure stories. And that’s just what you expect when you’ve got edtech companies moving at high speed.” — Simon

    “We’re dealing, on the one hand, with poor products that give the whole field a bad name, but on the other hand, there are some really great products out there that are making a tangible difference, and teachers are extremely enthusiastic about.” — Simon

    “There’s good evidence now, about the impact that some of these tools can have on learning. Teachers can give some homework out, and the next morning, they can see on their dashboard which questions were the students really struggling with.” — Simon

    “The area that we’re getting more and more interested in, and which educators are getting more and more interested in, are the kinds of skills and competencies you need for a very complex future workplace.” — Simon

    “We obviously want the students’ voice in the design process. But that has to be balanced with all the other voices [that] are there as well, like the educators’ voice, as well as the technologists, and the interaction designers and so forth.” — Simon, on the nuance of UX considerations for students

    “…you have to balance satisfying the stakeholder with actually what is needed.” — Brian

    “…we’re really at the mercy of behavior. We have to try and infer, from behavior or traces, what’s going on in the mind, of the humans we are studying.” — Simon

    “We might say, “Well, if we see a student writing like this, using these kinds of textual features that we can pick up using natural language processing, and they revise their draft writing in response to feedback that we’ve provided automatically, well, that looks like progress. It looks like they’re thinking more critically, or it looks like they’re reflecting more deeply on an experience they’ve had, for example, like a work placement.” — Simon

    “They’re in products already, and when they’re used well, they can be effective. But they can also be sort of weapon of mass destruction if you use them badly.” — Simon, on predictive models


    035 – Future Ethics Author and Designer Cennydd Bowles Shares Strategies for Designing Ethical Data Products That Benefit Our Business, Community and ... Mar 24, 2020

    Cennydd Bowles is a London-based digital product designer and futurist, with almost two decades of consulting experience working with some of the largest and most influential brands in the world. Cennydd has earned a reputation as a trusted guide, helping companies navigate complex issues related to design, technology, and ethics. He’s also the author of Future Ethics, a book which outlines key ethical principles and methods for constructing a fairer future.

    In this episode, Cennydd and I explore the role that ethics plays in design and innovation, and why so many companies today—in Silicon Valley and beyond—are failing to recognize the human element of their technological pursuits. Cennydd offers his unique perspective, along with some practical tips that technologists can use to design with greater mindfulness and consideration for others.

    In our chat, we covered topics from Cennydd’s book and expertise including:

    • Why there is growing resentment towards the tech industry and the reason all companies and innovators need to pay attention to ethics
    • The importance of framing so that teams look beyond the creation of an “ethical product / solution” and out towards a better society and future
    • The role that diversity plays in ethics and the reason why homogenous teams working in isolation can be dangerous for an organization and society
    • Cennydd’s “front-page test,” “designated dissenter,” and other actionable ethics tips that innovators and data product teams can apply starting today
    • Navigating the gray areas of ethics and how large companies handle them
    • The unfortunate consequences that arise when data product teams are complacent
    • The fallacy that data is neutral—and why there is no such thing as “raw” data
    • Why stakeholders must take part in ethics conversations
    Resources and Links:

    Cennydd Bowles

    Future Ethics (book)

    Design for Real Life

    The Trouble with Bias

    Twitter: @cennydd

    Quotes from Today’s Episode

    “There ought to be a clearer relationship between innovation and its social impacts.” — Cennydd

    “I wouldn’t be doing this if I didn’t think there was a strong upside to technology, or if I didn’t think it couldn’t advance the species.” — Cennydd

    “I think as our power has grown, we have failed to use that power responsibly, and so it’s absolutely fair that we be held to account for those mistakes.” — Cennydd

    “I like to assume most creators and data people are trying to do good work. They’re not trying to do ethically wrong things. They just lack the experience or tools and methods to design with intent.” — Brian

    “Ethics is about discussion and it’s about decisions; it’s not about abstract theory.” — Cennydd

    “I have seen many times diversity act as an ethical early warning system [where] people who firmly believe the solution they’re about to put out into the world is, if not flawless, pretty damn close.” — Cennydd

    “The ethical questions around the misapplication or the abuse of data are strong and prominent, and actually have achieved maybe even more recognition than other forms of harm that I talk about.” — Cennydd

    “There aren’t a whole lot of ethical issues that are black and white.” — Cennydd

    “When you never talk to a customer or user, it’s really easy to make choices that can screw them at the benefit of increasing some KPI or business metric.” — Brian

    “I think there’s really talented people in the data space who actually understand bias really well, but when they think about bias, they think they’re thinking more about, ‘how is it going to skew the insight from the data?’ Not the human impact.” — Brian

    “I think every business has almost a moral duty to take their consequences seriously.” — Cennydd


    034 – ML & UX: To Augment or Automate? Plus, Rating Overall Analytics Efficacy with Eric Siegel, Ph.D. Mar 10, 2020

    Eric Siegel, Ph.D. is founder of the Predictive Analytics World and Deep Learning World conference series, executive editor of “The Predictive Analytics Times,” and author of “Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die.” A former Columbia University professor and host of the Dr. Data Show web series, Siegel is a renowned speaker and educator who has been commissioned for more than 100 keynote addresses across multiple industries. Eric is best known for making the “how” and “why” of predictive analytics (aka machine learning) understandable and captivating to his audiences.

    In our chat, we covered:

    • The value of defining business outcomes and end user’s needs prior to starting the technical work of predictive modeling, algorithms, or software design.
    • The idea of data prototypes being used before engaging in data science to determine where models could potentially fail—saving time while improving your odds of success.
    • The first and most important step of Eric’s five-step analytics deployment plan
    • Getting multiple people aligned and coordinated about pragmatic considerations and practical constraints surrounding ML project deployment.
    • The score (1-10) Eric gave the data community on its ability to turn data into value
    • The difference between decision support and decision automation and what the Central Intelligence Agency’s CDAO thinks about these two methods for using machine learning.
    • Understanding how human decisions are informed by quantitative predictions from predictive models, and what’s required to deliver information in a way that aligns with their needs.
    • How Eric likes to bring agility to machine learning by deploying and scaling models incrementally to mitigate risk
    • Where the analytics field currently stands in its overall ability to generate value in the last mile.
    Resources and Links:

    Machine Learning Week

    #experiencingdata

    PredictiveAnalyticsWorld.com

    ThePredictionBook.com

    Dr. Data Show

    Twitter: @predictanalytic

    Quotes from Today’s Episode

    “The greatest pitfall that hinders analytics is not to properly plan for its deployment.” — Brian, quoting Eric

    “You don’t jump to number crunching. You start [by asking], ‘Hey, how is this thing going to actually improve business?’ “ — Eric

    “You can do some preliminary number crunching, but don’t greenlight, trigger, and go ahead with the whole machine learning project until you’ve planned accordingly, and iterated. It’s a collaborative effort to design, target, define scope, and ultimately greenlight and execute on a full-scale machine learning project.” — Eric

    “If you’re listening to this interview, it’s your responsibility.” — Eric, commenting on whose job it is to define the business objective of a project.

    “Yeah, so in terms of if 10 were the highest potential [score], in the sort of ideal world where it was really being used to its fullest potential, I don’t know, I guess I would give us a score of [listen to find out!]. Is that what Tom [Davenport] gave!?” — Eric, when asked to rate the analytics community on its ability to deliver value with data

    “We really need to get past our outputs, and the things that we make, the artifacts and those types of software, whatever it may be, and really try to focus on the downstream outcome, which is sometimes harder to manage, or measure … but ultimately, that’s where the value is created.” — Brian

    “Whatever the deployment is, whatever the change from the current champion method, and now this is the challenger method, you don’t have to jump entirely from one to the other. You can incrementally deploy it. So start by saying well, 10 percent of the time we’ll use the new method which is driven by a predictive model, or by a better predictive model, or some kind of change. So in the transition, you sort of do it incrementally, and you mitigate your risk in that way.” — Eric
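
    Eric’s champion/challenger idea maps onto a very small amount of routing code. Below is a minimal, illustrative Python sketch (mine, not from the episode): the champion and challenger model objects, their predict method, and the 10 percent starting share are all assumptions for illustration.

        import random

        def route_decision(features, champion, challenger, challenger_share=0.10):
            """Send most decisions to the current champion model and a small,
            configurable fraction to the challenger, keeping risk bounded."""
            if random.random() < challenger_share:
                return "challenger", challenger.predict(features)
            return "champion", champion.predict(features)

    Logging which arm served each decision lets you compare champion and challenger outcomes before gradually raising challenger_share.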


    033 - How Vidant Health’s Data Team Creates Empathetic Data Products and Ethical Machine Learning Models with Greg Nelson Feb 25, 2020

    Greg Nelson is VP of data analytics at Vidant Health, as well as an adjunct faculty member at Duke University. He is also the author of the “Analytics Lifecycle Toolkit,” which is a manual for integrating data management technologies. A data evangelist with over 20 years of experience in analytics and advisory, Nelson is widely known for his human-centered approach to analytics. In this episode, Greg and I explore what makes a data product or decision support application indispensable, specifically in the complex world of healthcare. In our chat, we covered:

    • Seeing through the noise and identifying what really matters when designing data products
    • The type of empathy training Greg and his COO are rolling out to help technical data teams produce more useful data products
    • The role of data analytics product management and why this is a strategic skillset at Vidant
    • The AI Playbook Greg uses at Vidant Health and their risk-based approach to assessing how they will validate the quality of a data product
    • The process Greg uses to test and handle algorithmic bias and how this is linked to credibility in the data products they produce
    • How exactly design thinking helps Greg’s team achieve better results, trust and credibility
    • How Greg aligns workflows, processes, and best practice protocols when developing predictive models
    Resources and Links:

    Vidant Health

    Analytics Lifecycle Toolkit

    Greg Nelson’s article “Bias in Artificial Intelligence”

    Greg Nelson on LinkedIn

    Twitter: @GregorySNelson

    Video: Tuning a card deck for human-centered co-design of Learning Analytics

    Quotes from Today's Episode

    “We'd rather do fewer things and do them well than do lots of things and fail.”— Greg

    “In a world of limited resources, our job is to make sure we're actually building the things that matter and that will get used. Product management focuses the light on use case-centered approaches and design thinking to actually come up with and craft the right data products that start with empathy.”— Greg

    “I talk a lot about whole-brain thinking and whole-problem thinking. And when we understand the whole problem, the whole ‘why’ about someone's job, we recognize pretty quickly why Apple was so successful with their initial iPod.”— Greg

    “The technical people have to get better [...] at extracting needs in a way that is understandable, interpretable, and really actionable, from a technology perspective. It's like teaching someone a language they never knew they needed. There's a lot of resistance to it.” — Greg

    “I think deep down inside, the smart executive knows that you don’t bat .900 when you're doing innovation.” — Brian

    “We can use design thinking to help us fail a little bit earlier, and to know what we learned from it, and then push it forward so that people understand why this is not working. And then you can factor what you learned into the next pass.” — Brian

    “If there's one thing that I've heard from most of the leaders in the data and analytics space, with regards particularly to data scientists, it’s [the importance of] finding this “other” missing skill set, which is not the technical skillset. It's understanding the human behavioral piece and really being able to connect the fact that your technical work does have this soft skill stuff.” — Brian

    “At the end of the day, I tell people our mission is to deliver data that people can trust in a way that's usable and actionable, built on a foundation of data literacy and dexterity. That trust in the first part of our core mission is essential.”— Greg


    032 - How and Why Talented Analytical Minds Leave People Scratching Their Head Around Data with Nancy Duarte Feb 11, 2020

    Nancy Duarte is a communication expert and the leader of the largest design firm in Silicon Valley, Duarte, Inc. She has more than 30 years of experience working with global companies and counts eight of the top ten Fortune 500 brands among her clientele. She is the author of six books, and her work has appeared in Fortune, Time Magazine, Forbes, Wired, Wall Street Journal, New York Times, Los Angeles Times, Cosmopolitan Magazine, and CNN.

    In this episode, Nancy and I discussed some of the reasons analytics and data experts fail to effectively communicate the insights and value around data. She drew on the key findings from her work as a communication expert, which she details in her new book, Data Story, and on the importance of communicating data through the natural structure of storytelling.

    In our chat, we covered:

    • How empathy is tied to effective communication.
    • Biases that cloud our own understanding of our communication skills
    • How to communicate an enormous amount of data effectively and engagingly
    • What’s wrong with sharing traditional presentations as a reading asset and Nancy’s improved replacement for them in the enterprise
    • The difference in presenting data in business versus scientific settings
    • Why STEAM, not STEM, is relevant to effective communication for data professionals and what happens when creativity and communication aren’t taught
    • How the brain reacts differently when it is engaged through a story
    Resources and Links:

    Nancy Duarte on LinkedIn

    Twitter: @nancyduarte

    Slidedocs

    Duarte DataStory

    Quotes from Today’s Episode

    “I think the biggest struggle for analysts is they see a lot of data.” —Nancy

    “In a business context, the goal is not to do perfect research most of the time. It’s actually to probably help inform someone else’s decision-making.” —Nancy

    “Really understand empathy, become a bit of a student of story, and when you start to apply these, you’ll see a lot of traction around your ideas.” — Nancy

    “We’ve so heavily rewarded the analytical mindset that now we can’t back out of that and be dual-modal, balancing an analytical mindset with real discipline around a creative mindset.” — Nancy

    “There’s a bunch of supporting data, but there’s also all this intuition and other stuff that goes into it. And so I think just learning to accept the ambiguity as part of that human experience, even in business.” — Brian

    “If your software application doesn’t produce meaningful decision support, then you didn’t do anything. The data is just sitting there and it’s not actually activating.” — Brian

    “People can’t draw a direct line from what art class or band does for you, and it’s the first thing that gets cut. Then we complain on the backend when people are working in professional settings that they can’t talk to us.” — Brian


    031 - How Design Helps Enable Repeatable Value on AI, ML, and Analytics Projects with Ganes Kesari Jan 28, 2020

    Ganes Kesari is the co-founder and head of analytics and AI labs at Gramener, a software company that helps organizations tell more effective stories with their data through robust visualizations. He’s also an advisor, public speaker, and author who talks about AI in plain English so that a general audience can understand it. Prior to founding Gramener, Ganes worked at companies like Cognizant, Birlasoft, and HCL Technologies serving in various management and analyst roles.

    Join Ganes and me as we talk about how design, as a core competency, has enabled Gramener’s analytics and machine learning work to produce better value for clients. We also touched on:

    • Why Ganes believes the gap between the business and data analytics organizations is getting smaller
    • How AI (and some other buzzwords) are encouraging more and more investments in understanding data
    • Ganes’ opinions about the “analytics translator” role
    • How companies might think they are unique for not using “traditional agile”—when in fact that’s what everyone is doing
    • Ganes’ thoughts on the similarities of use cases across verticals and the rise of verticalized deep data science solutions
    • Why Ganes believes organizations are increasingly asking for repeatable data science solutions
    • The pivotal role that empathy plays in convincing someone to use your software or data model
    • How Ganes’ team approaches client requests for data science projects, the process they follow to identify use cases for AI, and how they use AI to identify the biggest business problem that can be solved
    • What Ganes believes practitioners should consider when moving data projects forward at their organizations
    Resources and Links

    Gramener.com

    Ganes Kesari on Twitter: @Kesaritweets

    Ganes Kesari on LinkedIn: https://www.linkedin.com/in/ganes-kesari/

    Quotes from Today’s Episode

    “People tend to have some in-house analytics capability. They’re reaching out for design. Then it’s more of where people feel that the adoption hasn’t happened. They have that algorithm but no one understands its use. And then they try to buy some license or some exploratory visualization tools and they try their hand at it and they’ve figured out that it probably needs a lot more than some cute charts or some dashboards. It can’t be an afterthought. That’s when they reach out.” — Ganes

    “Now a lot more enquiries, a lot more engagements are happening centrally at the enterprise level where they have realized the need for data science and they want to run it centrally so it’s no longer isolated silos.” — Ganes

    “I see that this is a slightly broader movement where people are understanding the value of data and they see that it is something that they can’t avoid or they can’t prioritize it lower anymore.” — Ganes

    “While we have done a few hundred consulting engagements and helped with bespoke solutions, there is still an element of commonality. So that’s where we abstracted some of those common technology requirements and common solutions into our platform.” — Ganes

    “My general perception is that most data science and analytics firms don’t think about design as a core competency or part of analytics and data science—at least not beyond perhaps data visualization.” —Brian

    “I was in a LinkedIn conversation today about this and some comments that Tom Davenport had made on this show a couple of episodes ago. He was talking about how we need this type of role that goes out and understands how data is used and how systems and software are used such that we can better align the solutions with what people are doing. And I was like, ‘amen.’ That’s actually not a new role though; it’s what good designers do!” — Brian


    030 - Using AI to Recommend Personalized Medical Treatment Options with Joost Zeeuw of Pacmed Jan 14, 2020

    Joost Zeeuw is a data scientist and product owner at Pacmed, a data-driven healthcare and AI startup in Amsterdam that combines medical expertise and machine learning to create stronger patient outcomes and improve healthcare experiences. He’s also taught a number of different subjects—like physics, chemistry, and mathematics—at Lyceo, an online education service, and Luzac College in the Netherlands.

    Join Brian and Joost as they discuss the role of design and user experience within the context of providing personalized medical treatments using AI. Plus:

    • The role data has in influencing doctors’ decisions—without making the decisions
    • The questions Joost’s product team asks before designing any AI solution at Pacmed
    • How people’s familiarity with iPhones and ease-of-use has influenced expectations around simplicity—and the challenges this poses when there is machine learning under the hood
    • Why Brian thinks Pacmed’s unconventional approach to design is great—and what that approach looks like
    • The simple, non-technical, but critical thing Pacmed did early on to help them define their AI product strategy and avoid going down the wrong path
    • An example of an unexpected treatment prediction that Pacmed’s algorithm detected—which ended up being something that a specific field of medicine had been studying with classical research techniques 10,000 km away
    • Where Joost believes Western medicine falls short with respect to new drug trials
    Resources and Links
    • Joost on LinkedIn
    • Pacmed.ai on LinkedIn
    Quotes from Today’s Episode

    “Pacmed has a three-fold mission, which is, first of all, to try to make sure that every single patient gets the treatment that has proven to work for him or her based on prior data analysis. And next to that we say, ‘well, if an algorithm can learn all these awesome insights generated by thousands and thousands of doctors, then a doctor using one of those products is also very capable of learning more and more things from the lessons that are incorporated in this algorithm and this product.’ And finally, healthcare is very expensive. We are trying to maximize the efficiency and the effectiveness of that spend by making sure everybody gets a treatment that has the highest probability of working for him or her.” — Joost

    “Offering a data product like this is really another tool in that toolbox that allows the doctor to pierce through this insane amount of complexity that there is in giving care to a patient.” — Joost

    “Before designing anything, we ask ourselves this: Does it fit into the workflow of people that already have maybe one of the most demanding jobs in the world?” — Joost

    “There’s a very big gap between what is scientifically medically interesting and what’s practical in a healthcare system.” — Joost

    “When I talk about design here, I’m talking kind of about capital D design. So product design, user experience, looking at the whole business and the outcomes we’re trying to drive, it’s kind of that larger picture here.” — Brian

    “I don’t think this is ‘normal’ for a lot of people coming from the engineering side or from the data science side to be going out and talking to customers, thinking about, like, how does this person do their job and how does my work fit into, you know, a bigger picture solution of what this person needs to do all day, and what are the health outcomes we’re going for? That part of this product development process is not about data science, right? It’s about the human factors piece, about how does our solution fit into this world.” — Brian

    “I think that the impact of bringing people out into the field—whatever that is, that could be a corporate cubicle somewhere, a hospital, outside in a farm field—usually there’s a really positive thing that happens because I think people are able to connect their work with an actual human being that’s going to potentially use this solution. And when we look at software all day, it’s very easy to disconnect from any sense of human connection with someone else.” — Brian

    “If you’re a product owner or even if you’re more on the analytics side, but you’re responsible for delivering decision support, it’s really important to go get a feel for what people are doing all day.” — Brian


    029 - Why Google Believes it’s Critical to Pair Designers with Your Data Scientists to Produce Human-Centered ML & AI Products with Di Dang Dec 31, 2019

    Di Dang is an emerging tech design advocate at Google and helped lead the creation of Google’s People + AI Guidebook. In her role, she works with product design teams, external partners, and end users to support the creation of emerging tech experiences. She also teaches a course on immersive technology at the School of Visual Concepts. Prior to these positions, Di worked as an emerging tech lead and senior UX designer at POP, a UX consultant at Kintsugi Creative Solutions, and a business development manager at AppLift. She earned a bachelor of arts degree in philosophy and religious studies from Stanford University.

    Join Brian and Di as they discuss the intersection of design and human-centered AI and:

    • Why a data science leader should care about design and integrating designers during a machine-learning project, and the impacts when they do not
    • What exactly Di does in her capacity as an emerging tech design advocate at Google and the definition of human-centered AI
    • How design helps data science teams save money and time by elucidating the problem space and user needs
    • The two key purposes of Google’s People + AI Research (PAIR) team
    • What Google’s triptych methodology is and how it helps teams prevent building the wrong solution
    • A specific example of how user research and design helped ship a Pixel 2 feature
    • How to ensure an AI solution is human-centered when a non-tech company wants to build something but lacks a formal product manager or UX lead/resource
    • The original goals behind the creation of Google’s People + AI Guidebook
    • The role vocabulary plays in human-centered AI design
    Resources and Links

    Twitter: @Dqpdang

    Di Dang’s Website

    Di Dang on LinkedIn

    People + AI Guidebook

    Quotes from Today’s Episode

    “Even within Google, I can’t tell you how many times I have tech leaders, engineers who kind of cock an eyebrow at me and ask, ‘Why would design be involved when it comes to working with machine learning?’” — Di

    “Software applications of machine learning are a relatively nascent space, and we have a lot to learn in terms of designing for them. The People + AI Guidebook is a starting point and we want to understand what works, what doesn’t, and what’s missing so that we can continue to build best practices around AI product decisions together.” — Di

    “The key value proposition that design brings is we want to work with you to help make sure that when we’re utilizing machine learning, that we’re utilizing it to solve a problem for a user in a way that couldn’t be done through other technologies or through heuristics or rules-based programming—that we’re really using machine learning where it’s most needed.” — Di

    “A key piece that I hear again and again from internal Google product teams and external product teams that I work with is that it’s very, very easy for a lot of teams to default to a tech-first kind of mentality. It’s like, ‘Oh, well you know, machine learning, should we ML this?’ That’s a very common problem that we hear. So then, machine learning becomes this hammer for which everything is a nail—but if only a hammer were as easy to construct as a piece of wood and a little metal anvil kind of bit.” — Di

    “A lot of folks are still evolving their own mental model around what machine learning is and what it’s good for. But closely in relation—because this is something that I think people don’t talk as much about maybe because it’s less sexy to talk about than machine learning—is that there are oftentimes a lot of organizational or political or cultural uncertainties or confusion around even integrating machine learning.” — Di

    “I think there’s a valid promise that there’s a real opportunity with AI. It’s going to change businesses in a significant way and there’s something to that. At the same time, it’s like go purchase some data scientists, throw them in your team, and have them start whacking stuff. And they’re kind of waiting for someone to hand them a good problem to work on and the business doesn’t know and they’re just saying, ‘What is our machine learning strategy?’ And so someone in theory hopefully is hunting for a good problem to solve.” — Brian

    “Everyone’s trying to move fast all the time and ship code and a lot of times we focus on the shipping of code and the putting of models into production as our measurement—as opposed to the outcomes that come from putting something into production.” — Brian

    “The difference between the good and the great designer is the ability to merge the business objectives with ethically sound user-facing and user-centered principles.” — Brian


    028 - Cole Nussbaumer Knaflic On Data Storytelling, DataViz, and Why Your Data May Not Be Inspiring Action Dec 17, 2019

    When it comes to telling stories with data, Cole Nussbaumer Knaflic is ahead of the curve. In October 2015, she wrote a best-selling book called storytelling with data: a data visualization guide for business professionals. That book led to the creation of storytelling with data, an agency that helps businesses communicate more effectively using data, and she’s since followed up with another best-seller: storytelling with data: let’s practice! Prior to her current role, Cole served as a people analytics manager at Google, was the owner and chief consultant at Insight Analytics, and held several positions at Washington Mutual, among other roles.

    In our chat, we covered:

    • Why sharp communication skills are integral to telling stories with data
    • The skills data people need to effectively communicate with data
    • Who Cole thinks you should run your presentations by first, and the specific colleagues you should be sharing them with
    • Why it’s important to practice presentations in informal settings first
    • How looking at data in different formats can help you build more effective presentations
    • The differences between exploratory and explanatory data analysis in the context of storytelling
    • The important role of diction when presenting data
    • Cole’s opinions on the skills many modern workers need around data storytelling
    • Why data visualization and the ability to tell stories is not a nice-to-have skill
    • What Cole’s approach to preparing for a presentation looks like and the format she uses to kick off the process
    Resources and Links

    Designingforanalytics.com/seminar

    Twitter: @Storywithdata

    Company website: Storytellingwithdata.com

    Quotes from Today’s Episode

    “I've always really enjoyed that space where numbers and business intersect and enjoy how we can use numbers to get to understand things better and make smarter decisions.” — Cole

    “If you're the one analyzing the data, you know it best. And you're actually in a unique position to be able to derive and help others derive greater value from that data. But in order to do that, you have to be able to talk to other people about it and communicate what you've done to technical audiences and to non-technical audiences.” — Cole

    “When it comes to communicating effectively with data, you can't take out the human part. That's the part where things can either succeed or fail.” — Cole

    “There's no single right way to show data. Any data can be graphed a ton of different ways. And so when we're thinking about how we visualize our data, it really means stepping back and thinking about what sort of behavior we’re trying to drive in our audience. What do we want them to see in this? And then it often means iterating through different views of the data, which is also a fantastic way just to get to know your data better because different views will make observations easier or less easy to see.” — Cole
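
    Cole’s advice about iterating through different views is easy to put into practice. Here is a minimal matplotlib sketch (my illustration, not from the episode or book) that renders the same made-up series three ways, to see which view makes the intended takeaway easiest to spot:

        import matplotlib.pyplot as plt

        months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
        sales = [120, 135, 128, 150, 149, 171]  # made-up example data

        fig, axes = plt.subplots(1, 3, figsize=(12, 3))
        axes[0].bar(months, sales)      # compares magnitudes month to month
        axes[1].plot(months, sales)     # emphasizes the trend over time
        axes[2].scatter(months, sales)  # shows points with no connecting story
        for ax, title in zip(axes, ["bar", "line", "scatter"]):
            ax.set_title(title)
        plt.tight_layout()
        plt.show()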

    “As soon as we try to draw attention to one aspect of the data or another, it actually makes any other potential takeaways harder to see.” — Cole

    “Words are very important for making our data accessible and understandable.” — Cole

    “Depending on the visualization, what you're doing is you're teaching yourself not to assume that the information is necessarily clear. You're being objective. And it sounds like a dumb question, but that's kind of what I usually recommend to my clients: We need to be really objective about our assumptions about what's being communicated here and validate that.” — Brian

    “The low-fidelity format—especially if you're working with a stakeholder or perhaps someone who's going to be the recipient—enables them to give you honest feedback. Because the more polished that sucker looks, the less they're going to want to give you any.” — Brian


    027 - Balancing Your Inner Data Science Nerd While Becoming a Trusted Business Advisor and Strategist with Angela Bassa of iRobot Dec 03, 2019

    Angela Bassa is the director of data science at iRobot, where she heads data science and machine learning. iRobot is a technology company focused on robotics (you might have clean floors thanks to a Roomba). Prior to joining iRobot, Angela wore several different hats, including working as a financial analyst at Morgan Stanley, serving as the senior manager of big data analytics and platform engineering at EnerNOC, and even working as a scuba instructor in the U.S. Virgin Islands.

    Join Angela and me as we discuss the role data science plays in robotics and explore:

    • Why Angela doesn’t believe in a division between technical and non-technical skill
    • Why Angela came to iRobot and her mission
    • What data breadcrumbs are and what you should know about them
    • The skill Angela believes matters most when turning data science into a producer of decision support
    • Why the last mile of the UX is often way longer than one mile
    • The critical role expectation management plays in data science, how Angela handles delivering surprise findings to the business, and the marketing skill she taps to help her build trust
    Resources and Links

    Twitter: @AngeBassa

    Angela’s Website

    iRobot

    Designing for Analytics Seminar

    Quotes from Today’s Episode

    “Because these tools that we use sometimes can be quite sophisticated, it’s really easy to use very complicated jargon to impart credibility onto results that perhaps aren’t merited. I like to call that math-washing the result.” — Angela

    “Our mandate is to make sure that we are making the best decisions—that we are informing strategy rather than just believing certain bits of institutional knowledge or anecdotes or trends. We can actually sort of demonstrate and test those hypotheses with the data that is available to us. And so we can make much better informed decisions and, hopefully, less risky ones.” — Angela

    “Data alone isn’t the ground truth. Data isn’t the thing that we should be reacting to. Data are artifacts. They’re breadcrumbs that help us reconstruct what might have happened.” — Angela

    “[When getting somebody to trust the data science work], I don’t think the trust comes from bringing someone along during the actual timeline. I think it has more to do with bringing someone along with the narrative.” — Angela

    “It sounds like you’ve created a nice dependency for your data science team. You’re seen as a strategic partner as opposed to being off in the corner doing cryptic work that people can’t understand.” — Brian

    “When I talk to data scientists and leaders, they often talk about how technical skills are very easy to measure. You can see them on paper, you can get them in the interview. But there are these other skills that are required to do effective work and create value.” — Brian


    026 - Why Tom Davenport Gives a 2 out of 10 Score To the Data Science and Analytics Industry for Value Creation Nov 19, 2019

    Tom Davenport has literally written the book on analytics. Actually, several of them, to be precise. Over the course of his career, Tom has established himself as the authority on analytics and how their role in the modern organization has evolved in recent years. Tom is a distinguished professor at Babson College, a research fellow at the MIT Initiative on the Digital Economy, and a senior advisor at Deloitte Analytics. The discussion was timely as Tom had just written an article about a financial services company that had trained its employees on human-centered design so that they could ensure any use of AI would be customer-driven and valuable. We discussed their journey and:

    • Why on a scale of 1-10, the field of analytics has only gone from a one to about a two in ten years’ time
    • Why so few analytics projects actually make it into production
    • Examples of companies who are using design to turn data into useful applications of AI, decision support and product improvements for customers
    • Why shadow IT shouldn’t be a bad word
    • AI moonshot projects vs. MVPs and how they relate
    • Why journey mapping is incredibly useful and important in analytics and data science work
    • How human-centered design and ethnography is the tough work that’s required to turn data into decision support
    • Tom’s new book and his thoughts on the future of data science and analytics
    Resources and Links:
    • Website: Tomdavenport.com
    • LinkedIn: Tom Davenport
    • Twitter: @tdav
    • Designingforanalytics.com/seminar
    • Designingforanalytics.com
    Quotes from Today’s Episode

    “If you survey organizations and ask them, ‘Does your company have a data-driven culture?’ they almost always say no. Surveys even show a kind of negative movement over recent years in that regard. And it's because nobody really addresses that issue. They only address the technology side.” — Tom

    “Eventually, I think some fraction of [AI and analytics solutions] get used and are moderately effective, but there is not nearly enough focus on this. A lot of analytics people think their job is to create models, and whether anybody uses it or not is not their responsibility... We don't have enough people who make it their jobs to do that sort of thing.” — Tom

    “I think we need this new specialist, like a data ethnographer, who could sort of understand much more how people interact with data and applications, and how many ways they get screwed up.” — Tom

    “I don't know how you inculcate it or teach it in schools, but I think we all need curiosity about how technology can make us work more effectively. It clearly takes some investment, and time, and effort to do it.” — Tom

    “TD Wealth’s goal was to get [its employees] to experientially understand what data, analytics, technology, and AI are all about, and then to think a lot about how it related to their customers. So they had a lot of time spent with customers, understanding what their needs were to make that match with AI. [...] Most organizations only address the technology and the data sides, so I thought this was very refreshing.” — Tom

    “So we all want to do stuff with data. But as you know, there are a lot of poor solutions that get provided from technical people back to business stakeholders. Sometimes they fall on deaf ears. They don't get used.” — Brian

    “I actually had a consultant I was talking to recently who said, you know, the average VP/director or CDO/CAO has about two years now to show results, and this gravy train may be slowing down a little bit.” — Brian

    “One of the things that I see in the kind of the data science and analytics community is almost this expectation that ‘I will be handed a well-crafted and well-defined problem that is a data problem, and then I will go off and solve it using my technical skills, and then provide you with an answer.’” — Brian


    025 - Treating Data Science at IDEO as a Discipline of Design with Dean Malmgren Nov 05, 2019

    Dean Malmgren cut his teeth as a chemical and biological engineer. In grad school, he studied complex systems and began telling stories about them through the lens of data algorithms. That led him to co-found Datascope Analytics, a data science consulting company which was purchased by IDEO, a global design firm. Today, Dean is an executive director at IDEO and helps teams use data to build delightful products and experiences.

    Join Dean and me as we explore the intersection of data science and design and discuss:

    • Human-centered design and why it’s important to data science
    • What it was like for a data science company to get ingested into a design firm and why it’s a perfect match
    • Why data science isn’t always good at creating things that have never existed before
    • Why teams need to prototype rapidly and why data scientists should hesitate to always use the latest tools
    • What data scientists can learn from design teams and vice-versa
    • Why data scientists need to talk to end users early and often, and the importance of developing empathy
    • The difference between data scientists and algorithm designers
    • Dean’s opinions on why many data analytics projects fail
    Resources and Links

    Twitter: @DeanMalmgren

    IDEO

    Datascope

    Quotes from Today’s Episode

    “One of the things that we learned very, very quickly, and very early on, was that designing algorithms that are useful for people involves a lot more than just data and code.” — Dean

    “In the projects that we do at IDEO, we are designing new things that don’t yet exist in the world. Designing things that are new to the world is pretty different than optimizing existing processes or business units or operations, which tends to be the focus of a lot of data science teams.” — Dean

    “The reality is that designing new-to-the-world things often involves a different mindset than optimizing the existing things.” — Dean

    “You know if somebody rates a movie incorrectly, it’s not like you’d throw out Siskel and Ebert’s recommendations for the rest of your life. You just might not pay as much attention to them. But that’s very different when it comes to algorithmic recommendations. We have a lot less tolerance for machines making mistakes.” — Dean

    “The key benefit here is the culture that design brings in terms of creating early and getting feedback early in the process, as opposed to waiting, you know, three, five, six, seven months working on some model, getting it 97% accurate but 10% utilized.” — Brian

    “You can do all the best work in the world. But at the end of the day, if there’s a human in the loop, it’s that last mile or last hundred feet, whatever you want to call it, where you make it or break it.” — Brian

    “Study after study shows that 10 to 20% of big data analytics projects and AI projects succeed. I’ve actually been collecting them as a hobby in a single article, because they keep coming out.” — Brian


    024 - How Empathy Can Reveal a 60%-Accurate Data Science Solution is a Solid Customer Win with David Stephenson, Ph.D. Oct 22, 2019

    David Stephenson, Ph.D., is the author of Big Data Demystified, a guide for executives that explores the transformative nature of big data and data analytics. He’s also a data strategy consultant and professor at the University of Amsterdam. In a previous life, David worked in various data science roles at companies like Adidas, Coolblue, and eBay.

    Join David and me as we discuss what makes data science projects succeed and explore:

    • The non-technical issues that lead to ineffective data science and analytics projects
    • The specific type of communication that is critical to the success of data science and analytics initiatives (and how working in isolation from your stakeholder or business sponsor creates risk)
    • The power of showing value early, starting small/lean, and one way David applies agile to data science projects
    • The problems that emerge when data scientists only want to do “interesting data science”
    • How design thinking can help data scientists and analytics practitioners make their work resonate with stakeholders who are not “data people”
    • How David now relies on design thinking heavily, and what it taught him about making “cool” prototypes nobody cared about
    • What it’s like to work on a project without understanding who’s sponsoring it
    Resources and Links

    DSI Analytics Website

    Connect with David on LinkedIn

    David’s book: Big Data Demystified

    On Twitter: @Stephenson_Data

    Quotes from Today’s Episode

    “You see a lot of solutions being developed very well, which were not designed to meet the actual challenge that the industry is facing.” — David

    “You just have that whole wasted effort because there wasn’t enough communication at inception.” — David

    “I think that companies are really embracing agile, especially in the last few years. They’re really recognizing the value of it from a software perspective. But it’s really challenging from the analytics perspective—partly because data science and analytics don’t fit into the scrum model very well, for a variety of reasons.” — David

    “That for me was a real learning point—to understand the hardest thing is not necessarily the most important thing.” — David

    “If you’re working with marketing people, an 80% solution is fine. If you’re working with finance, they really need exact numbers. You have to understand what your target audience needs in terms of precision.” — David

    “I feel sometimes that when we talk about ‘the business,’ people don’t understand that the business is a collection of people—just like a government is a collection of real humans doing jobs, and they have goals and needs and selfish interests. So there’s really a collection of end customers and the person that’s paying for the solution.” — Brian

    “I think it’s always important—whether you’re a consultant or you’re internal—to really understand who’s going to be evaluating the value creation.”— Brian

    “You’ve got to keep those lines of communication open and make sure they’re seeing the work you’re doing and evaluating and giving feedback on it. ‘Throw it over the wall’ is a very high-risk model.” — Brian


    023 - Balancing AI-Driven Automation with Human Intervention When Designing Complex Systems with Dr. Murray Cantor Oct 08, 2019

    Dr. Murray Cantor has a storied career that spans decades. Recently, he founded Aptage, a company that provides project risk management tools using Bayesian Estimation and machine learning. He’s also the chief scientist at Hail Sports, which focuses on applying precision medicine techniques to sports performance. In his spare time, he’s a consulting mathematician at Pattern Computer, a firm that engineers state-of-the-art pattern recognition solutions for industrial customers.

    Join Murray and me as we explore the cutting edge of AI and cover:

    • Murray’s approach to automating processes that humans typically do, the role humans have in the design phase, and how he thinks about designing affordances for human intervention in automated systems
    • Murray’s opinion on causal modeling (explainability/interpretability), the true stage we are in with XAI, and what’s next for causality in AI models
    • Murray’s opinions about the 737 Max’s automated trim control system interface (or lack thereof) and how it should have been designed
    • The favorite method Murray has for predicting outcomes from small data sets (see the sketch after this list)
    • The major skill gaps Murray sees with young data scientists in particular
    • How using science fiction stories can stimulate creative thinking and help kick off an AI initiative successfully with clients, customers and stakeholders
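
    The episode deliberately teases Murray’s small-data method without naming it, so treat the following as a stand-in rather than his method: a minimal Beta-Binomial update, one common Bayesian approach to estimating an outcome probability from a handful of trials. All numbers here are made up.

        # Beta-Binomial update: estimate a success probability from few trials.
        # With a Beta(a, b) prior and s successes in n trials, the posterior
        # is Beta(a + s, b + n - s).
        a, b = 1.0, 1.0           # uniform prior over the unknown probability
        successes, trials = 3, 7  # a deliberately small data set

        post_a = a + successes
        post_b = b + (trials - successes)
        posterior_mean = post_a / (post_a + post_b)
        print(f"Posterior mean estimate: {posterior_mean:.2f}")  # prints 0.44
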
    Resources and Links

    Murray Cantor on LinkedIn

    New York Times exposé article on the Boeing 737 Max

    New York Times article on the 737 Max whistleblower

    Quotes from Today’s Episode

    “We’re in that stage of this industrial revolution we’re going through with augmenting people’s ability with machine learning. Right now it’s more of a craft than a science. We have people out there who are really good at working with these techniques and algorithms. But they don’t necessarily understand they’re essentially a solution looking for a problem.” — Murray

    “A lot of design principles are the same whether or not you have AI. AI just raises the stakes.” — Murray

    “The big companies right now are jumping the gun and saying they have explainable AI when they don’t. It’s going to take a while to really get there.” — Murray

    “Sometimes, it’s not always understood by non-designers, but you’re not testing the people. You’re actually testing the system. In fact, sometimes they tell you to avoid using the word test when you’re talking to a participant, and you tell them it’s a study to evaluate a piece of software, or in this case a cockpit, to figure out if it’s the right design or not. It’s so that they don’t feel like they’re a rat in the maze. In reality, we’re studying the maze.” — Brian

    “Really fundamental to understanding user experience and design is to ask the question, who is the population of people who are going to use this and what is their range of capability?” — Murray

    “Take the implementation hats off and come up with a moonshot vision. From the moonshot, you might find out there are these little tangents that are actually feasible increments. If you never let yourself dream big, you’ll never hit the small incremental steps that you may be able to take.” — Brian


    022 - Creating a Trusted Data Science Team That Is Indispensable to the Business with Scott Friesen Sep 24, 2019

    Scott Friesen’s transformation into a data analytics professional wasn’t exactly linear. After graduating with a biology degree and becoming a pre-med student, he switched gears and managed artists in the music industry. After that, he worked at Best Buy, eventually becoming their Senior Director of Analytics for the company’s consumer insights unit. Today, Scott is the SVP of Strategic Analytics at Echo Global Logistics, a provider of technology-enabled transportation and supply chain management services. He also advises for the International Institute for Analytics.

    In this episode, Scott shares what he thinks data scientists and analytics leaders need to do to become a trustworthy and indispensable part of an organization. Scott and I both believe that designing good decision support applications and creating useful data science solutions involve a lot more than technical knowledge. We cover:

    • Scott’s trust equation, why it’s critical for analytics professionals, and how he uses it to push transformation across the organization
    • Scott’s “jazz” vs “classical” approach to creating solutions
    • How to develop intimacy and trust with your business partners (e.g., IT) and executives, and the non-technical skills analytics teams need to develop to be successful
    • Scott’s opinion about design thinking and analytics solutions
    • How to talk about risk to business stakeholders when deploying data science solutions
    • How the success of Scott’s new pricing model was impeded by something that had nothing to do with the data—and how he addressed it
    • Scott’s take on the emerging “analytics translator” role
    • The two key steps to career success—and volcanoes
    Resources and Links

    Scott Friesen on LinkedIn

    Quotes from Today's Episode

    “You might think it is more like classical music, but truly great analytics are more like jazz.” — Scott

    “If I'm going to introduce change to an organization, then I'm going to introduce perceived risk. And so the way for me to drive positive change—the way for me to drive adding value to the organizations that I'm a part of—is the ability to create enough credibility and intimacy that I can get away with introducing change that benefits the organization.” — Scott

    “I categorize the analytic pursuit into three fundamental activities: The first is to observe, the second is to relate, and the third is to predict.” — Scott

    “It's not enough to just understand the technology part and how to create great models. You can get all that stuff right and still fail in the last mile to deliver value.” — Brian

    “I tend to think of this in terms of what you called ‘intimacy.’ I don’t know if you equate that to empathy, which is really understanding the thing you are talking about from the perspective of the other person. When we do UX research, the questions themselves are what form this intimacy. An easy way to do that is by asking open-ended questions that require open-ended answers to get that person to open up to you.” — Brian


    021 - Turning Complex Cloud IT Data Into Useful Decision Support Info with John Purcell of CloudHealth Sep 10, 2019

    John Purcell has more than 20 years of experience in the technology world. Currently, he’s VP of Products at CloudHealth, a company that helps organizations manage their increasingly complex cloud infrastructure effectively. Prior to this role, he held the same position at SmartBear Software, makers of application performance monitoring solutions. He’s also worn several hats at companies like LogMeIn and Red Bend Software.

    In today’s episode, John and I discuss how companies are moving more and more workloads to the cloud, and how John and his team at CloudHealth build a platform that makes it easy for all users—even non-technical ones—to analyze and manage data in the cloud and control their financial spending. In addition to exploring the overall complexity of using analytics to inform the management of cloud environments, we also covered:

    • How CloudHealth designs for multiple personas from the financial analyst to the DevOps operator when building solutions into the product
    • Why John has a “maniacal point of view” and “religion” around design and UX, and how they have the power to disrupt a market
    • How design helps turn otherwise complex data sets that might require an advanced degree to understand into useful decision support
    • How data can lead to action, and how CloudHealth balances automation vs. manual action for its customers using data to make decisions
    • Why John believes user experience is a critical voice at the table during the very earliest stages of any new analytics/data initiative
    Resources and Links

    Twitter: @PurcellOutdoors

    LinkedIn

    Quotes from Today’s Episode

    “I think that’s probably where the biggest point of complexity and the biggest point of challenge is for us: trying to make sure that the platform is flexible enough to be able to inject datasets we’ve never seen before and to be able to analyze and find correlations between unknown datasets that we may not have a high degree of familiarity with—so that we can generate insight that’s actionable, but deliver it in a way that’s [easy for anyone to understand].” — John
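
    The correlation-finding John describes can be prototyped quickly once a new dataset lands in a common table. A minimal pandas sketch (my illustration, not CloudHealth’s implementation; the column names and values are made up):

        import pandas as pd

        # Hypothetical, newly ingested cloud telemetry joined on a common key.
        df = pd.DataFrame({
            "hourly_spend":   [12.0, 14.5, 13.1, 20.2, 22.8],
            "vcpu_hours":     [40, 48, 44, 70, 78],
            "idle_instances": [5, 4, 5, 2, 1],
        })

        # Pairwise correlations flag candidate relationships worth a closer look.
        print(df.corr().round(2))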

    “My core philosophy is that you need UX at the table early and at every step along the way as you’re contemplating product delivery, and I mean all the way upstream at the strategic phase, at the identification of what you want to go tackle next including product strategy, pain identification, persona awareness, and who are we building for—all the way through solving the problem, what should the product be capable of, and user validation.” — John

    “[I]n the cloud, we’re just at the very early stages of [automation based on analytics] from a pure DevOps point of view. We’re still in the world of ‘show me your math. Show me why this is the recommendation you’re making.’” — John

    “When making decisions using data, some IT people don’t like the system taking action without them being involved because they don’t trust that any product would be smart enough to make all the right decisions, and they don’t want applications going down.” — Brian

    “I think the distinction you made between what I would call user interface design, which is the surface layer, buttons, fonts, colors, all that stuff, often gets conflated in the world of analytics as being, quote, ‘design.’ And as I think our audience is hearing from John here, it [design] goes much beyond that. It can get into something like, ‘how do you design a great experience around API documentation? Where’s the demo code? How do I run the demo?’ All of that can definitely be designed as well.” — Brian

    “I hear frequently in my conversations with clients and people in the industry that there are a lot of data scientists who just want to use the latest models, and they want to work on model quality and predictive accuracy, etc. But they’re not thinking about how someone is going to use this model to make a decision, and whether there will be some business value created at the end.” — Brian


    020 - How Human-Centered Design Increases Engagement with Data Science Initiatives Aug 27, 2019

    Ahmer Inam considers himself an evangelist of data science who’s been “doing data science since before it was called data science.” With more than 20 years of leadership experience in the data science and analytics field at companies including Nike and Cambia Health, Ahmer knows a thing or two about what makes data science projects succeed—and what makes them fail.

    In today’s episode, Ahmer and I discuss his experiences using design thinking and his “human-centered AI” process to ensure that internal analytics and data science initiatives actually produce usable, useful outputs that turn into business value. Much of this was formed while Ahmer was a Senior Director and Head of Advanced Analytics at Nike, a company that is known as a design-mature organization. We covered:

    • The role of empathy in data science teams and how it helps data people connect with non-technical users who may not welcome “yet another IT tool”
    • Ahmer’s thoughts on Lean Coaching, Scrum Teams, and getting outside help to accelerate the design and creation of your first data products and predictive models
    • The role of change management in the process of moving data products into production
    • Ahmer’s two-week process to kick-start data product initiatives used at Nike
    • How model accuracy isn’t as important early on as other success metrics when prototyping solutions with customers
    Resources and Links

    How Analytics Are Informing Change At Nike

    LinkedIn

    Quotes from Today’s Episode

    “Build data products with the people, for the people…and bring a sense of vulnerability to the table.” — Ahmer

    “What I have seen is that a lot of times we can build models, we can bring the best of the technologies and the most optimal technology platforms, but in the end, if the business process and the people are not ready to take it and use it, that’s where it fails.” — Ahmer

    “If we don’t understand people in the process, essentially, the adoption is not going to work. In the end, when it comes to a lot of these data science exercises or projects or development of data products, we have to really think about it as a change management exercise and nothing short of that.” — Ahmer

    “Putting humans at the center of these initiatives drives better value and it actually makes sure that these tools and data products that we’re making actually get used, which is what ultimately is going to determine whether or not there’s any business value—because the data itself doesn’t have any value until it’s acted upon.” — Brian

    “One of these that’s been stuck in my ear like an earworm is that a lot of the models fail to get to production still. And so this is the ongoing theme of basically large analytics projects, whether you call it big data analytics or AI, it’s the same thing. We’re throwing a lot of money at these problems, and we’re still creating poor solutions that end up not doing anything.” — Brian

    “I think the really important point here is that early on with these initiatives, it’s important to figure out: ‘What is going to stop this person from potentially engaging with my service?’” — Brian


    019 - The Non-Technical (Human!) Challenges that Can Impede Great Data Science Solutions Aug 13, 2019

    Dr. Bob Hayes will be the first to tell you that he’s a dataphile. Ever since he took a stats course in college in the 80s, Bob’s been hooked on data. Currently, Bob is the Research and Analytics Director at Indigo Slate. He’s also the president of Business over Broadway, a consultancy he founded in 2007. In a past life, Bob served as Chief Research Officer at Appuri and AnalyticsWeek, Chief Customer Officer at TCELab, and a contributing analyst at Gleanster, among many other roles.

    In today’s episode, Bob and I discuss a recent Kaggle survey that highlighted several key non-technical impediments to effective data science projects. In addition to outlining what those challenges are and exploring potential solutions to them, we also covered:

    • The three key skills successful data science teams have
    • Why improving customer loyalty involves analyzing several metrics, not just one
    • Why Bob feels the scientific method is just as important today as it’s been for hundreds of years
    • The importance of repeatable results
    • How prototyping early can save time and drive adoption of data science projects
    • Bob’s advice on how to move data science projects forward (and one key skill he feels most business leaders lack)
    • The role of the analytics translator
    Resources and Links:

    Dr. Bob Hayes on LinkedIn

    Seeing Theory

    Calling Bullshit

    Doctor Bob Hayes on Twitter

    Business Over Broadway

    IndigoSlate

    Quotes from Today’s Episode

    “I’ve always loved data. I took my first stats course in college over 30 years ago and I was hooked immediately. I love data. Sometimes I introduce myself as a dataholic. I love it.” — Bob

    “I’m a big fan of just kind of analyzing data, just getting my hands on data, just exploring it. But that can lead you down a path of no return where you’re just analyzing data just to analyze it. What I try to tell my clients is that when you approach a data set, have a problem that you’re trying to solve. The challenge there, I think, stems from the fact that a lot of data science teams don’t have a subject matter expert on the team to pose the right questions.” — Bob

    “The three findings that I found pretty interesting were, number one, a lack of a clear question to be answering or a clear direction to go in with the available data. The second one was that data science results were not used by the business decision makers. And the third one was an inability to integrate findings into the organization’s decision making processes.” — Brian

    “It makes you wonder, ‘if you didn’t have a good problem to solve, maybe that’s why [the findings] didn’t get used in the first place.’” — Brian

    “That part isn’t so much the math and the science. That’s more the psychology and knowing how people react. Because you’re going to have certain business stakeholders that still want to kind of shoot from the hip and their experience. Their gut tells them something. And sometimes that gut is really informed.” — Brian

    “If executives are looking at data science and AI as a strategic initiative, it seems really funny to me that someone wouldn’t be saying, ‘What do we get out of this? What are the next steps?’ when the data team gets to the end of a project and just moves on to the next one.” — Brian


    018 - The Business Value of Showing the “Why” in AI Models with Jana Eggers (CEO, Nara Logics) Jul 30, 2019

    Jana Eggers, a self-proclaimed math and computer nerd, is CEO of Nara Logics, a company that helps organizations use AI to eliminate data silos and unlock the full value of their data, delivering predictive personalized experiences to their customers along the way. The company leverages the latest neuroscience research to model data the same way our brains do. Jana also serves on Fannie Mae’s digital advisory board, which is tasked with finding affordable housing solutions across the United States. Prior to joining Nara Logics, Jana wore many different hats, serving as CEO of Spreadshirt and General Manager of QuickBase at Intuit, among other positions. She also knows about good restaurants in PDX!

    In today’s episode, Jana and I explore her approaches to using AI to help enterprises make interesting and useful predictions that drive better business outcomes and improve customer experience. In addition to discussing how AI can help strengthen personalization and support smarter decision making, we also covered:

    • The power of showing the whys when providing predictions (i.e., explainable AI or XAI)
    • Jana’s thoughts on why some data scientists struggle with inflated expectations around AI
    • Brian’s #facepalm about lipstick and data
    • The power of what-if simulations and being able to remove factors from predictions
    • The power of context and how Nara Logics weighs recent data vs. long-term historical data in its predictions
    • How Nara Logics leverages the wiring of the brain—the connectome—to inspire the models they build and the decision support help they provide to customers
    • Why AI initiatives need to consider the “AI trinity”: data, the algorithm, and the results an organization is aiming for
    Resources and Links:

    Nara Logics

    Follow Jana on Twitter

    Connect with Jana on LinkedIn

    Quotes from Today’s Episode

    “We have a platform that is really built for decision support. How do you go from having […] 20 to having about 500 to 2,000 decision factors coming in? Once we get that overload of information, our tool is used to help people with those decisions. And yes, we’re using a different approach than the traditional neural net, which is what deep learning is based on. While we use that in our tool, we’re more on the cognitive side. […] I’ve got a lot of different signals coming in, how do I understand how those signals relate to each other and then make decisions based on that?” — Jana

    “One of the things that we do that also stands us apart is that our AI is transparent—meaning that when we provide an answer, we also give the reasons why that is the right answer for this context. We think it is important to know what was taken into account and what factors weigh more heavily in this context than other contexts.” — Jana

    “It is extremely unusual—and I can even say that I’ve never really seen it—that people just say, ‘Okay, I trust the machine. I’m comfortable with that. It knows more than me.’ That’s really unusual. The only time I’ve seen that is when you’re really doing something new and no one there has any idea what it should be.” — Jana

    “With regards to tech answering ‘why,’ I’ve worked on several monitoring and analytics applications in the IT space. When doing root cause analysis, we came up with this idea of referring to monitored objects as being abnormally critical and normally critical. Because at certain times of day, you might be running a backup job and so the IO is going crazy, and maybe the latency is higher. But the IO is supposed to be that way at that time. So how do you knock down that signal and not throw up all the red flags and light up the dashboard when it’s supposed to be operating that way? Answering ‘why’ is difficult.” — Brian
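
    To make Brian’s “normally critical” idea concrete, here is a minimal sketch of time-aware baselining in Python: learn what each hour of the day usually looks like, so an expected 2am backup spike stays quiet while the same reading at 2pm raises a flag. All names and numbers are hypothetical illustrations, not code from any product mentioned in the episode.

        import statistics
        from collections import defaultdict

        class HourlyBaseline:
            """Learn what is 'normal' per hour of day; flag deviations from that norm."""

            def __init__(self, z_threshold=3.0):
                self.samples = defaultdict(list)  # hour of day -> observed values
                self.z_threshold = z_threshold

            def observe(self, hour, value):
                self.samples[hour].append(value)

            def is_abnormal(self, hour, value):
                history = self.samples[hour]
                if len(history) < 2:
                    return False  # not enough history to judge yet
                mean = statistics.mean(history)
                stdev = statistics.stdev(history) or 1e-9  # guard against zero spread
                return abs(value - mean) / stdev > self.z_threshold

        baseline = HourlyBaseline()
        # A week of hourly IO readings: quiet all day, heavy IO during the 2am backup.
        for day in range(7):
            for hour in range(24):
                baseline.observe(hour, (900 if hour == 2 else 100) + day)

        print(baseline.is_abnormal(2, 905))   # False: heavy IO is normal at 2am
        print(baseline.is_abnormal(14, 905))  # True: the same IO at 2pm should alert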

    “We’ve got lipstick, we’ve got kissing. I’m going to get flagged as ‘parental advisory’ on this episode in iTunes probably. ;-)” — Brian

    “You can’t just live in the closet and do your math and hope that everyone is going to see the value of it. Anytime we’re building these complex tools and services (what I call human-in-the-loop applications), you’re probably going to have to go engage with other humans, whether it’s customers or your teammates or whatever.” — Brian


    017 - John Cutler on Productizing Storytelling, Measuring What Matters & Analytics Product Management Jul 16, 2019

    John Cutler is a Product Evangelist for Amplitude, an analytics platform that helps companies better understand user behavior and grow their businesses. John focuses on user experience and evidence-driven product development by mixing and matching various methodologies to help teams deliver lasting outcomes for their customers. As a former UX researcher at AppFolio, a product manager at Zendesk, Pendo.io, AdKeeper, and RichFX, a startup founder, and a product team coach, John has a perspective that spans individual roles, domains, and products.

    In today’s episode, John and I discuss how productizing storytelling in analytics applications can be a powerful tool for moving analytics beyond vanity metrics. We also covered the importance of understanding customers’ jobs/tasks, involving cross-disciplinary teams when creating a product/service, and:

    • John and Amplitude’s North Star strategy and the three measurements they care about when tracking their own customers’ success
    • Why John loves the concept of analytics “notebooks” (also a particular feature of Amplitude’s product) vs. the standard dashboard method
    • Understanding relationships between metrics through “weekly learning users” who share digestible content
    • John’s opinions on involving domain experts and cross-discipline teams to enable products focused on outcomes over features
    • Recognizing whether your product/app is about explanatory or exploratory analytics
    • How jazz relates to business: you don’t know what you don’t know yet
    Resources and Links:

    Connect with John on LinkedIn

    Follow John on Twitter

    Keep up with John on Medium

    Amplitude

    Designing for Analytics

    Quotes from Today’s Episode

    “It’s like you know in your heart you should pair with domain experts and people who know the human problem out there and understand the decisions being made. I think there’s a lot of organizational inertia that discourages that, unfortunately, and so you need to fight for it. My advice is to fight for it because you know that it’s important and you know that this is not just a pure data science problem or a pure analytics problem. There’s probably a lot of surrounding information that you need to understand to be able to actually help the business.” – John

    “We definitely ‘dogfood’ our product and we also ‘dogfood’ the advice we give our customers.” – John

    “It’s very easy to create assets and create code and things that look like progress. They mask themselves as progress and improvement, and they may not actually return any business value or customer value explicitly. We have to consciously know what the outcomes are that we want.” – Brian

    “We’ve got to get the right bodies in the room that know the right questions to ask. I can smell when the right questions aren’t being asked, and it’s so powerful.” – Brian

    “Instead of thinking about what are all the right stats to consider, [I sometimes suggest teams] write in plain English, like in prose format, what would be the value that we could possibly show in the data. Maybe it can’t even technically be achieved today. But expressing the analytics in words like, ‘you should change this knob to seven instead of nine because we found out X, Y, and Z happened. We also think blah, blah, blah, blah, blah, and here is how we know that, and there’s your recommendation.’ This method is highly prescriptive, but it’s an exercise in thinking about the customer’s experience.” – Brian


    016 - Farming with Data: How Advanced Analytics Are Transforming the Agriculture Industry with Dinu Ajikutira Jul 02, 2019

    Today we are joined by Dinu Ajikutira, VP of Product at CiBO Technologies. CiBO Technologies was founded in 2015 to provide objective, scientifically driven insights in support of farmland economics. Dinu is currently leading an effort to productize what I found to be some very impressive analytically driven simulation capabilities to help farmers and agronomists. Specifically, CiBO’s goal is to provide a software service that uses mapping and other data to predictively model a piece of land’s agricultural value before crops are ever planted. In order to build a product that truly meets his customers’ needs, Dinu goes the extra mile (in one case, 1,000 miles) to literally meet his customers in the field to understand their pain points.

    On this episode, Dinu and I discuss how CiBO helps reduce farmers’ risk and optimize crop yields, as well as the challenges the agriculture industry poses from a data standpoint. We also discussed:

    • Farmers’ interactions with data analytics products and how to improve their trust in those products
    • Where CiBO’s software can be used and who would benefit from it
    • Dinu’s “ride-along” experience visiting farmers and agronomists in the Midwest to better understand customer needs and interactions with the tool
    • What Dinu has learned about farmers’ comfort using technology
    • The importance of understanding seasonality
    • The challenges of designing the tool for the various users and building user interfaces based on user needs
    • The biggest product challenges in the ag tech field and how CiBO handles those challenges
    Resources and Links:

    Dinu Ajikutira on LinkedIn

    CiBO Technologies

    Experiencing Data Podcast

    Quotes from Today’s Episode

    “CiBO was built on a mission of enabling sustainable agriculture, and we built this software platform that brings weather, soil, topography, and agronomic practices in combination with simulation to actually digitally grow the plant, and that allows us to explain to the users why something occurs, what if something different had happened, and predict the outcomes of how plants will perform in different environments.” — Dinu Ajikutira

    “The maturity of the agricultural industry [with regards] to technology is in its early stages, and it’s at a time when there is a lot of noise around AI, machine learning, and data analytics. That makes it very complicated, because you don’t know if the technology really does what it claims to do, and there is a community of potential users that are not used to using high-tech technology to solve their problems.” — Dinu Ajikutira

    “In agriculture, the data is very sparse, but with our software we don’t need all the data. We can supplement data that is missing, using our simulation tools, and be able to predict weather outcomes that you have not experienced in the past.” — Dinu Ajikutira

    “To add clarity, you need to add information sometimes, and the issue isn’t always the quantity of the information; it’s how it’s designed. I’ve seen this repeatedly: there are times when, if you properly add information and design it well, you actually bring a lot more insight.” – Brian O’Neill

    “Sometimes the solution is going to be to add information. If you’re feeling like you have a clutter problem, or your customers are complaining about too much information, that’s usually a symptom that the design is wrong. It’s not necessarily that the data has no value. It may be the wrong data.” — Brian O’Neill


    015 - Opportunities and Challenges When Designing IoT Analytics Experiences for the Industrial & Manufacturing Industries with CEO Bill Bither Jun 18, 2019

    Bill Bither, CEO and Co-Founder of MachineMetrics, is a serial software entrepreneur and a manufacturing technology leader. He founded and bootstrapped Atalasoft to image-enable web applications, which led to a successful exit to Kofax in 2011. In 2014, he co-founded MachineMetrics to bring visibility and predictability to the manufacturing floor with an Industrial IoT analytics platform that collects data from machines. This data is used to benchmark performance, drive efficiency, improve equipment uptime, and enable automation.

    Today, join us as we discuss the various opportunities and challenges in the complex world of industrial IoT and manufacturing. Bill and I discuss the importance of visualizations and their relationship to improving efficiency in manufacturing, how talking to machine operators helps add context to analytics data and even informs UI/UX decisions, and how MachineMetrics goes about making the telemetry from these machines useful to the operators.

    We also covered:

    • How improving a customer’s visibility into CNC machines helped reveal accurate utilization rates and improved efficiency
    • How simple visualizations make a tangible difference in operational performance
    • Bill’s model for the 4 different phases of analytics
      • Descriptive
      • Diagnostic
      • Predictive
      • Prescriptive
    • Mistakes Bill learned from early on about product dev in the IIoT analytics space
    • What Bill learned from talking to customers that ended up identifying a major design flaw his team wasn’t aware of
    • The value you can glean from talking to customers
    • MachineMetrics’ challenges with finding product-market fit and aligning their product around customers’ needs
    • How MachineMetrics has learned to simplify the customer’s analytics experience
    Resources and Links

    Bill Bither on LinkedIn

    MachineMetrics

    Quotes from Today’s Episode

    “We have so much data, but the piece that really adds enormous value is human feedback.” — Bill

    “Simplicity is really hard. It takes time because it requires empathy and it requires going in and really getting into the head or the life of the person that’s gonna use your tool. You have to understand what’s it like being on a shop floor running eight different CNC machines. If you’ve never talked to someone, it’s really hard to empathize with them.” — Brian

    “In all the work that we do, in adding more intelligence to the product, it’s just making the experience simpler and simpler.” — Bill

    “You don’t have to go in and do great research; you can go in and just start doing research and learn on the way. It’s like going to the gym. They always tell you, ‘It doesn’t matter what exercise you do, just go and start.’ …then you can always get better at making your workout optimal.” — Brian

    “It’s really valuable to have routine visits with customers, because you just don’t know what else might be going on.” — Brian

    “The real value of the research is asking ‘why’ and ‘how,’ and getting to the root problem. That’s the insight you want. Customers may have some good design ideas, but most customers aren’t designers. … Our job is to give people what they need.” — Brian


    014 - How Worthington Industries Makes Predictive Analytics Useful from the Steel Mill Floor to the Corner Office with Dr. Stephen Bartos Jun 04, 2019

    Today we are joined by the analytics “man of steel,” Steve Bartos, the Manager of the Predictive Analytics team in the steel processing division at Worthington Industries. 😉 At Worthington, Steve is tasked with strategically driving impactful analytics wider and deeper across the division and, as part of this effort, helps ensure an educated, supported, and connected analytics community. In addition, Steve also serves as a co-leader of the Columbus Tableau User Group.

    On today’s episode, Steve and I discuss how analytics are providing internal process improvements at Worthington. We also cover the challenges Steve faces designing effective data-rich products, the importance of the “last mile,” and how his PhD in science education shapes his work in predictive analytics.

    In addition, we also talk about:

    • Internal tools that Steve has developed and how they help Worthington Industries
    • Preplanning and its importance for creating a solution that works for the client
    • Using analytics to inform daily decisions, aid in monthly meetings, and assist with Kaizen (Lean)-focused decisions
    • How Steve pulls out the meaningful pieces of information that can improve the division’s performance
    • How Steve tries to avoid Data-Rich and Insight-Poor customer solutions
    • The importance of engaging the customer/user throughout the process
    • How Steve leverages his science education background to communicate with his peers and with executives at Worthington
    Resources and Links

    Twitter: @OlDirtyBarGraph

    Steve Bartos on LinkedIn

    Quotes from Today’s Episode

    “Seeing the way analytics can help facilitate better decision-making doesn’t necessarily come from showing someone every single question they can possibly answer and waiting for them to applaud how much time and how much energy and effort you’ve saved them.” - Steve Bartos

    “It's hard to talk about the influence of different machine parameters on quality if every operator is setting it up based on their own tribal knowledge of how it runs best.” - Steve Bartos

    “I think bringing the question back to the user much more frequently, much sooner, and at a much more focused scale has paid dividends.” - Steve Bartos

    “It's getting the people that are actually going to sit and use these interfaces involved in the creation process… they should be helping you define the goals and the problems… by getting them involved, it makes the adoption process a lot easier.” - Brian O’Neill

    “It’s real easy to throw up some work that you’ve done in Tableau around a question that a manager or an executive had. It’s really difficult to do that well and have some control of the conversation: being able to say, here’s what we did, here was the question, here’s the data we used, here’s how we analyzed it, and here’s the suggestion we’re making; now let’s talk about why, and to do all that in a way that doesn’t lead to an in-the-weeds session and frustration.” - Steve Bartos


    013 - Paul Mattal (Dir. of Network Systems, Akamai) on designing decision support tools and analytics services for the largest CDN on the web May 21, 2019

    Paul Mattal is the Director of Network Systems at Akamai, one of the largest content delivery networks in the U.S. Akamai is a major part of the backbone of the internet and on today’s episode, Paul is going to talk about the massive amount of telemetry that comes into Akamai and the various decision support tools his group is in charge of providing to internal customers. On top of the analytics aspect of our chat, we also discussed how Paul is approaching his team’s work being relatively new at Akamai.

    Additionally, we covered:

    • How does Paul access and use internal customer knowledge to improve the quality of applications they make?
    • When to build a custom decision support tool vs. using a BI tool like Tableau?
    • How does Akamai measure if their analytics are creating customer value?
    • The process Paul uses with the customer to design a new data product MVP
    • How Paul decides which of the many analytics applications and services “get love” when resources are constrained
    • Paul’s closing advice about taking the time to design and plan before you code
    Resources and Links:

    Akamai

    Twitter @pjmattal

    Paul Mattal on LinkedIn

    Paul Mattal on Facebook

    Quotes from Today’s Episode

    “I would say we have a lot of engagement with [customers] here. People jump to answering questions with data and they’re quick. They know how to do that and they have very good ideas about how to make sure that the approaches they take are backed by data and backed by evidence.” — Paul Mattal

    “There’s actually a very mature culture here at Akamai of helping each other. Not necessarily taking on an enormous project if you don’t have the time for it, but opening your door and helping somebody solve a problem, if you have expertise that can help them.” — Paul Mattal

    “I’m always curious about feedback cycles because there’s a lot of places that they start with telemetry and data, then they put technology on top of it, they build a bunch of software, and look at releases and outputs as the final part. It’s actually not. It’s the outcomes that come from the stuff we built that matter. If you don’t know what outcomes those look like, then you don’t know if you actually created anything meaningful.” — Brian O’Neill

    “We’ve talked a little bit about the MVP approach, which is about doing that minimal amount of work, which may or may not be working code, but you did a minimum amount of stuff to figure out whether or not it’s meeting a need that your customer has. You’re going through some type of observation process to fuel the first thing, asset or output that you create. It’s fueled by some kind of observation or research upfront so that when you go up to bat and take a swing with something real, there’s a better chance of at least a base hit.” — Brian O’Neill

    “Pretend to be the new guy for as long as you can. Go ask [about their needs/challenges] again and get to really understand what that person [customer] is experiencing, because I know you’re going to be able to meet the need much better.” — Paul Mattal


    012 - Dr. Andrey Sharapov (Data Scientist, Lidl) on explainable AI and demystifying predictions from machine learning models for better user experience May 07, 2019

    Dr. Andrey Sharapov is a senior data scientist and machine learning engineer at Lidl. He is currently working on various projects related to machine learning and data product development, including analytical planning tools that help with business issues such as stocking and purchasing. Previously, he spent 2 years at Xaxis, and he led data science initiatives and developed tools for customer analytics at TeamViewer. Andrey and I met at a Predictive Analytics World conference we were both speaking at, and I found out he is very interested in “explainable AI,” an aspect of user experience that I think is worth talking about, so that’s what today’s episode focuses on.

    In our chat, we covered:

    • Lidl’s planning tool for their operational teams and what it predicts
    • The lessons learned from Andrey’s first attempt to build an explainable AI tool and other human factors related to designing data products
    • What explainable AI is, and why it is critical in certain situations
    • How explainable AI is useful for debugging other data models
    • Why explainable AI isn’t always used
    • Andrey’s thoughts on the importance of including your end user in the data product creation process from the very beginning

    Also, here’s a little post-episode thought from a design perspective:

    I know there are countervailing opinions that state that explainability of models is “over-hyped.” One popular rationalization points to how certain professions (e.g., medical practitioners) make decisions all the time that cannot be fully explained, yet people trust the decision making without expecting a full explanation. The reality is that while not every model or end UX necessarily needs explainability, I think there are human factors that explainability can satisfy, such as building customer trust more rapidly, or helping convince customers/users why/how a new technology solution may be better than “the old way” of doing things. This is not a blanket recommendation to “always include explainability” in your service/app/UI; many factors come into play, and as with any design choice, you should let your customer/user feedback help you decide whether your service needs explainability to be valuable, useful, and engaging.
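
    To make the “show the why” idea concrete, here is a minimal sketch of one common approach: with an additive (linear) scoring model, each feature’s weight times its value is a human-readable contribution you can surface next to the prediction. The feature names and weights below are hypothetical; this is not Nara Logics’ or Lidl’s actual method.

        def explain(weights, features, top_n=3):
            """Score a prediction and return the biggest per-feature 'reasons'."""
            contributions = {name: weights[name] * value
                             for name, value in features.items()}
            score = sum(contributions.values())
            # Sort by absolute impact so the strongest drivers come first.
            reasons = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
            return score, reasons[:top_n]

        # Hypothetical demand-planning features, in the spirit of the episode.
        weights = {"price_delta": -2.0, "seasonality": 1.5, "promo_active": 3.0}
        features = {"price_delta": 0.8, "seasonality": 1.2, "promo_active": 1.0}

        score, reasons = explain(weights, features)
        print(f"predicted demand lift: {score:+.2f}")
        for name, impact in reasons:
            print(f"  because {name} contributed {impact:+.2f}")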

    Resources and Links:

    Andrey Sharapov on LinkedIn

    Explainable AI - XAI Group (LinkedIn)

    Quotes from Today’s Episode

    “I hear frequently there can be a tendency in the data science community to want to do excellent data science work and not necessarily do excellent business work. I also hear how some data scientists may think, ‘explainable AI is not going to improve the model’ or ‘help me get published’ – so maybe that’s responsible for why [explainable AI] is not as widely in use.” – Brian O’Neill

    “When you go and talk to an operational person, who has in mind a certain number of basic rules, say three, five, or six rules [they use] when doing planning, and then when you come to him with a machine learning model, something that is let’s say, ‘black box,’ and then you tell him ‘okay, just trust my prediction,’ then in most of the cases, it just simply doesn’t work. They don’t trust it. But the moment when you come with an explanation for every single prediction your model does, you are increasing your chances of a mutual conversation between this responsible person and the model…” – Andrey Sharapov

    “We actually do a lot of traveling these days, going to Bulgaria, going to Poland, Hungary, every country, we try to talk to these people [our users] directly. [We] try to get the requirements directly from them and then show the results back to them…” – Andrey Sharapov

    “The sole purpose of the tool we built was to make their work more efficient, in a sense that they could not only produce better results in terms of accuracy, but they could also learn about the market themselves because we created a plot for elasticity curves. They could play with the price and see if they made the price too high, too low, and how much the order quantity would change.” – Andrey Sharapov


    011 - Gadi Oren (VP Product, LogicMonitor) on analytics for monitoring applications and looking at declarative analytics as “opinions” Apr 23, 2019

    My guest today is Gadi Oren, the VP of Product for LogicMonitor. Gadi is responsible for the company’s strategic vision and product initiatives. Previously, Gadi was the CEO and Co-Founder of ITculate, where he was responsible for developing world-class technology and product that created contextual monitoring by discovering and leveraging application topology. Before that, Gadi served as the CTO and Co-founder of Cloudscope, and he has a management degree from MIT Sloan.

    Today we are going to talk with Gadi about analytics in the context of monitoring applications. This was a fun chat as Gadi and I have both worked on several applications in this space, and it was great to hear how Gadi is habitually integrating customers into his product development process. You’re also going to hear Gadi’s interesting way of framing declarative analytics as casting “opinions,” which I thought was really interesting from a UX standpoint. We also discussed:

    • How to define what is “normal” for an environment being monitored and when to be concerned about variations
    • Gadi’s KPI for his team regarding customer interaction and why it is important.
    • What kind of data is needed for effective prototypes
    • How to approach design/prototyping for new vs. existing products
    • Mistakes that product owners make falling in love with early prototypes
    • Interpreting common customer signals that may identify a latent problem needing to be solved in the application
    Resources and Links:

    LogicMonitor

    Twitter: @gadioren

    LinkedIn: Gadi Oren

    Quotes from Today’s Episode

    “The barrier of replacing software goes down. Bad software will go out and better software will come in. If it’s easier to use, you will actually win in the marketplace because of that. It’s not a secondary aspect.” – Gadi Oren

    “…ultimately, [not talking to customers] is going to take you away from understanding what’s going on and you’ll be operating on interpolating from information you know instead of listening to the customer.” – Gadi Oren

    “Providing the data or the evidence for the conclusion is a way not to black-box everything. You’re providing the human with the relevant analysis and evidence that went into the conclusion, and the hope is that, if that was modeled on their behavior, you’re modeling the system around what they would have done. You’re basically just replacing human work with computer work.” — Brian O’Neill

    “What I found in my career and experience with clients is that sometimes if they can’t get it perfect, they’re worried about doing anything at all. I like this idea of [software analytics] casting an opinion.” — Brian O’Neill

    “LogicMonitor’s mission is to provide a monitoring solution that just works, that’s simple enough to just go in, install it quickly, and get coverage on everything you need so that you as a company can focus on what you really care about, which is your business.” — Gadi Oren


    010 - Carl Hoffman (CEO, Basis Technology) on text analytics, NLP, entity resolution, and why exact match search is stupid Apr 09, 2019

    My guest today is Carl Hoffman, the CEO of Basis Technology, and a specialist in text analytics. Carl founded Basis Technology in 1995, and in 1999, the company shipped its first products for website internationalization, enabling Lycos and Google to become the first search engines capable of cataloging the web in both Asian and European languages. In 2003, the company shipped its first Arabic analyzer and began development of a comprehensive text analytics platform. Today, Basis Technology is recognized as the leading provider of components for information retrieval, entity extraction, and entity resolution in many languages. Carl has been directly involved with the company’s activities in support of U.S. national security missions and works closely with analysts in the U.S. intelligence community.

    Many of you work all day in the world of analytics: numbers, charts, metrics, data visualization, etc. But today we’re going to talk about one of the other ingredients in designing good data products: text! As an amateur polyglot myself (I speak decent Portuguese and Spanish, and am attempting to learn Polish), I really enjoyed this discussion with Carl. If you are interested in languages, text analytics, search interfaces, entity resolution, and are curious to learn what any of this has to do with offline events such as the Boston Marathon Bombing, you’re going to enjoy my chat with Carl. We covered:

    • How text analytics software is used by border patrol agencies and its limitations
    • The role of humans in the loop, even with good text analytics in play
    • What actually happened in the case of the Boston Marathon Bombing?
    • Carl’s article, “Exact Match” Isn’t Just Stupid. It’s Deadly.
    • The 2 lessons Carl has learned regarding working with native-tongue source material
    • Why Carl encourages Unicode compliance when working with text, why having a global perspective is important, and how Carl actually implements this at his company
    • Carl’s parting words on why hybrid architectures are a core foundation to building better data products involving text analytics
    Resources and Links:
    • Basis Technology
    • Carl’s article: “Exact Match” Isn’t Just Stupid. It’s Deadly.
    • Carl Hoffman on LinkedIn
    Quotes from Today’s Episode

    “One of the practices that I’ve always liked is actually getting people that aren’t like you, that don’t think like you, in order to intentionally tease out what you don’t know. You know that you’re not going to look at the problem the same way they do…” — Brian O’Neill

    “Bias is incredibly important in any system that tries to respond to human behavior. We have our own innate cultural biases that we’re sometimes not even aware of. As you [Brian] point out, it’s impossible to separate human language from the underlying culture and, in some cases, geography and the lifestyle of the people who speak that language…” — Carl Hoffman

    “What I can tell you is that context and nuance are equally important in both spoken and written human communication… Capturing all of the context means that you can do a much better job of the analytics.” — Carl Hoffman

    “It’s sad when you have these gaps like what happened in this border crossing case where a name spelling is responsible for not flagging down [the right] people. I mean, we put people on the moon and we get something like a name spelling [entity resolution] wrong. It’s shocking in a way.” — Brian O’Neill
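
    For the technically curious, the gap Brian describes is exactly where exact-match lookup breaks down: transliterated names rarely match byte-for-byte. Here is a toy sketch contrasting exact comparison with normalization plus fuzzy matching; the spellings below are invented examples, not records from the actual case.

        import difflib
        import unicodedata

        def normalize(name):
            # Strip accents, punctuation, and case before comparing.
            name = unicodedata.normalize("NFKD", name)
            name = "".join(c for c in name if not unicodedata.combining(c))
            return "".join(c for c in name.lower() if c.isalnum() or c == " ")

        def fuzzy_match(a, b, threshold=0.85):
            ratio = difflib.SequenceMatcher(None, normalize(a), normalize(b)).ratio()
            return ratio >= threshold

        watchlist_entry = "Mohammed Al-Masri"  # hypothetical watchlist spelling
        travel_record = "Muhammad Almasri"     # same person, transliterated differently

        print(watchlist_entry == travel_record)             # False: exact match misses
        print(fuzzy_match(watchlist_entry, travel_record))  # True: fuzzy match catches it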

    “We live in a world which is constantly shades of gray, and the challenge is getting as close to yes or no as we can.” – Carl Hoffman

